Talk:Number/Archive 1


blackboard bold

blackboard bold does not mesh well with a paragraph. Pizza Puzzle

Eh, I'm used to seeing blackboard bold; plain bold reminds me of a variable, not a set.

I agree that we should use it; however, the current font for it is too big and doesn't mesh with the rest of the page. Perhaps, if somebody submits a slightly smaller .png, we can use that. Pizza Puzzle

I plan to upload a whole bunch of .pngs like this later this month. Of course, feel free to beat me to it. ^_^ Still, we should prefer markup that renders most directly in HTML, when such a thing works -- for now. -- Toby Bartels 05:57 12 Jun 2003 (UTC)


Could somebody produce a little Venn diagram picture showing the various number sets? I removed this verbal description of the Venn diagram. AxelBoldt 15:23, 29 Sep 2003 (UTC)


This statement is incorrect:

"Ratios of integers are called rational numbers or fractions."

In fact, ratios of integers are fractions but NOT rational numbers. The union of the set of integers and the set of fractions equals the set of rational numbers. The distinction between integers and fractions is that at least two integers (in a ratio) are required to define a fraction.

I suspect this error has been carried over to a couple of other related pages. It must be corrected.

OmegaMan

Rationals are usually defined as equivalence classes of ordered pairs of integers. Saying that they are ratios of integers is reasonable; this is only supposed to be an informal statement. A formal construction is given in the rational number article. --Zundark 08:17, 17 Nov 2003 (UTC)
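For readers following this exchange, the standard construction Zundark mentions can be sketched briefly (this is the usual textbook definition, given here only for orientation):

    \mathbb{Q} = \bigl(\mathbb{Z} \times (\mathbb{Z} \setminus \{0\})\bigr) / \sim,
    \qquad (a,b) \sim (c,d) \iff ad = bc

For example, (1,2) ~ (2,4) ~ (3,6): one equivalence class, one rational number, many "fractions". The integers embed via n ↦ (n,1), so under this construction the rationals already contain a copy of the integers; there is no need to form a union of "integers" and "fractions".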

Yes, but the formal construction you directed me to is for the rational numbers, NOT the fractions. The presentation of the various sets of numbers is more clearly understandable if it incorporates a brief summary of their methodical construction, each built onto the next.

Where the integers have already been defined separately, all trivial cases of fractions which equal integers should then be eliminated as redundant (i.e., those where the ratio of two integers, converted to a fraction, can be simplified so that the denominator equals one). Then, the rationals can be defined as the union of the two underlying subsets.

Note that integers require only one integer (obviously) to define themselves reflexively, which is not possible for fractions.

You may think I am splitting trivial hairs. Still, the distinction I am making is reality-based and relevant. I am not just making this stuff up as I go along. It came directly from a "theory of arithmetic" textbook I own.

OmegaMan


This article is missing the ordinal/cardinal distinction for finite numbers. Although the finite ordinals are the same as the finite cardinals, the use to which they are put is different: "I have five beads" vs. "this is door number 5", and so they are conceptually different, even if mathematically equivalent. Can someone help put this distinction in the article? -- The Anome 13:54, 27 Jan 2004 (UTC)

Umpteen

I've added umpteen in the "see also" list, largely to de-orphan it. I'm not absolutely sure this is the right article to link to it, but I can't think of an alternative. Suggestions would be more than welcome. DavidWBrooks 20:01, 16 Feb 2004 (UTC)

Natural Numbers and Zero

I have never known zero to be included in the set of natural numbers (a.k.a. counting numbers, hence the exclusion of zero, as one never counts the zeroth member of a set). Rather, it is the only non-natural member of the set of whole numbers. I hope someone will correct this, or at least address the question if I am in error.

Arnold Karr

Peano would be the man to ask, but he's no longer around. John H, Morgan 16:36, 2 June 2006 (UTC)

Peano, who formulated the axioms that define the natural numbers, identified zero as a natural number. Because of this we should clearly differentiate 'natural' from 'counting' numbers in anything that is written about either of them. Counting numbers are identical with the positive integers, but the definition of integer relies on the axioms of natural number, making a definition of "counting numbers" as "positive integers" somewhat circular. John H, Morgan 09:31, 12 February 2006 (UTC)
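For reference, since several comments below turn on this point, here is one common modern statement of the axioms with zero included (a sketch only; whether one starts at 0 or at 1 is exactly the convention under discussion):

    0 \in \mathbb{N}
    n \in \mathbb{N} \implies S(n) \in \mathbb{N} \quad\text{(every number has a successor)}
    S(m) = S(n) \implies m = n \quad\text{(the successor map is injective)}
    \forall n:\ S(n) \neq 0 \quad\text{(zero is not a successor)}
    \bigl(P(0) \land \forall n\,(P(n) \to P(S(n)))\bigr) \implies \forall n\,P(n) \quad\text{(induction)}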

It's very common to include 0 as a natural number (so that "natural number" and "finite ordinal" mean the same thing). It's undesirable for some purposes, however, so not everyone does it. --Zundark 07:42, 23 May 2004 (UTC)
I think it's very natural to have zero items of something (in contrast to having a negative amount of something). In fact, all of us own zero items of almost everything. It is at least as natural as the usual definition of (the number) zero as the empty set.
Of course we do not count the zeroth member of a set, but when we count something, we start out with zero items counted, before adding the first to the inventory, if there is any. When somebody asks you to count the number of apples in your pocket, you would not protest saying "I cannot count them". You would maybe say "there are none", but this is just a synonym of zero. MFH 13:31, 7 Apr 2005 (UTC)
This is, however, original research. We can only report that this is a subject on which authorities disagree. Rick Norwood 17:51, 2 June 2006 (UTC)

Mixing Numbers & Biology

I advocate the total removal of the speculative "biological basis" section, regardless of whether or not it may be wholly or partially correct. We should stick to provable information in an article involving mathematics in an online encyclopedia. OmegaMan

extensions and generalizations

The section "generalizations" should be merged into "extensions" (which could receive subsectioning). I suggest to put it after the nonstandard stuff and before the comment on abstract algebra. Please feel free to do so. MFH 18:07, 7 Apr 2005 (UTC)

Numbers

As per the discussion here, I vote to improve the GUI for the number classification. --Electron Kid 01:19, 27 October 2005 (UTC)

Improve it how? Saying "make this better, instead of worse" doesn't contribute much.
You basically have two choices: One, you can specify what you think is inadequate now, and give ideas for how it might be improved. People might discuss that with you, if they feel like it. Or, you can be WP:BOLD and take your best shot. If we don't like it, we'll put it back. --Trovatore 03:23, 27 October 2005 (UTC)

Integral domains

I've removed this:

Preserving the main ideas of "quantity" except for the total order, one can define numbers as elements of any integral domain. Unfortunately, the set of polynomials with integer coefficients forms an integral domain, and polynomials are not numbers.

I just don't buy that whether we consider something as a "number" or not has much to do with whether it's in an integral domain. The extended real numbers aren't an integral domain; the ordinal numbers aren't; the cardinal numbers aren't. Matrices in general don't form an integral domain, but nonsingular matrices do, and that doesn't make the latter numbers. --Trovatore 16:45, 27 October 2005 (UTC)

Attempted rewrite.

This is a subject I have thought about carefully for many years. I am going to attempt a rewrite, a little at a time. Rick Norwood 21:31, 5 December 2005 (UTC)

I have removed D from the introduction, but not from the charts, so it will be easy to restore if it has any defenders. All of the other sets here are well-defined sets of numbers. D, defined as the set of numbers whose decimal representation terminates, is not a well-defined set of numbers, because the set changes if the base changes. In base ten, the fraction 1/3 would not be in D while the fraction 1/5 would be. But if we write in base 12, then 1/3 is in D and 1/5 is not. Rick Norwood 22:29, 5 December 2005 (UTC)
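The base-dependence Rick describes follows from a standard fact: a fraction in lowest terms has a terminating expansion in base b exactly when every prime factor of its denominator divides b. A minimal sketch in Python, offered only as an illustration of the point (the function name is mine):

    from math import gcd

    def terminates(num, den, base):
        """Return True if num/den has a terminating expansion in the given base."""
        den //= gcd(num, den)              # reduce the fraction to lowest terms
        g = gcd(den, base)
        while g > 1:                       # strip prime factors shared with the base
            while den % g == 0:
                den //= g
            g = gcd(den, base)
        return den == 1                    # terminates iff no other factors remain

    print(terminates(1, 3, 10))  # False: 1/3 = 0.333... in base ten
    print(terminates(1, 5, 10))  # True:  1/5 = 0.2 in base ten
    print(terminates(1, 3, 12))  # True:  1/3 = 0.4 in base twelve
    print(terminates(1, 5, 12))  # False: 1/5 does not terminate in base twelve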

"Decimal", by definition, means base ten. The set is quite well-defined, just not very interesting. I have no objection to its removal. --Trovatore 22:37, 5 December 2005 (UTC)
Good point. Rick Norwood 22:45, 5 December 2005 (UTC)

Hearing no objection to the removal of D as a not particularly interesting set, I will remove it. Rick Norwood 00:42, 11 December 2005 (UTC)

The Number template

I think for the list of constants at the bottom of the template, for a symbol for a constant to provide useful information it should have a link. Would anyone like to undertake a stub, at least, for each of the constants? I would, but I am unfamiliar with many of them. Rick Norwood 01:07, 11 December 2005 (UTC)

Mathematical Collaboration Of The Week

So this article is now the collaboration. I can't think of much to do with it. How should this be improved? --Salix alba (talk) 09:19, 1 February 2006 (UTC)

The article is now longer. I've added a few references.
I still think, as I mentioned above, somebody needs to explain the strange numbers in the box. If nobody knows what they mean, we should delete them.
It might be appropriate to list the digits from 0 to 9 in Arabic symbols.
We could get into other ways of writing numbers but I think that belongs in the article on numerals.

Rick Norwood 21:01, 1 February 2006 (UTC)

Good edits, Jon. Rick Norwood 19:20, 4 February 2006 (UTC)
  • JA: Danke. Jon Awbrey 19:32, 4 February 2006 (UTC)

Could anybody check this?

The article states that the symbol for the integers comes from the German word "zahlen". Can anybody check if this is true and give a reference? I'm asking because "Zahlen" (= numbers) seems more likely than "(be)zahlen" (= to pay). There are also other possibilities, like "zählen" (= to count)... --Cyc 17:02, 4 February 2006 (UTC)

  • JA: It should be capitalized, but German words frequently get decapitated in the process of being anglicized, for example, zeitgeist. A more specialized influence occurs when the group of integers and its many-splintered modulations are regarded as cyclic groups. Jon Awbrey 17:14, 4 February 2006 (UTC)

List of constants

Unless someone objects, I am going to remove from the list of constants all constants with red links. So, act quickly to save your favorite constant. Rick Norwood 16:09, 5 February 2006 (UTC)

  • JA: Rick, while you're at it, could you remove the notation of P for primes from the inset box. This is highly non-standard and many more writers will use P as a nonce char for "Positive integers" than anybody does for "Primes". Jon Awbrey 16:16, 5 February 2006 (UTC)

Done.

I do not know how to fix the extra space between Further Generalizations and Extensions.

I think we need more references. Rick Norwood 13:33, 6 February 2006 (UTC)

Trivia

Pfafrich -- I notice that some people use rv for reversion and others use rw. Is there any difference that I should be aware of? Rick Norwood 13:17, 6 February 2006 (UTC)

  • JA: I think you may be seeing rvv = rv2 = revert vandalism as rw. Jon Awbrey 17:08, 10 February 2006 (UTC)

Thanks. Rick Norwood 18:45, 10 February 2006 (UTC)

References & Bibliography

  • JA: I'll be rummaging through my pack for some suitable stuffing as the day/week wears on. Jon Awbrey 14:32, 6 February 2006 (UTC)

long addition to the article

I really don't think this belongs here -- most of it is about numerals rather than numbers. Also, I'm told that bullet points are considered unencyclopedic. Rick Norwood 14:30, 7 February 2006 (UTC)

Yes, I know it's long at the moment. I've done a big cut and paste from Timeline of mathematics to get most of it in. I'm sure it could be cut down considerably, but I thought it's best if a complete history is there as a starting point to work with.
As to numerals, it's worth distinguishing material about place systems from specific numerals. Place systems are an important development. Much of the first section covers 0, -1 and the roots of our decimal system from Asia. --Salix alba (talk) 15:17, 7 February 2006 (UTC)

Non(-)negative integer & Positive integer are becoming standard

  • JA: In places where the longstanding equivocality of the term "natural number" has become a pain-in-the-[insert your favorite anatomical location here] on a second-by-second basis, like Ninja Sloane's OEIS, it has become standard to end all the fussn'-&-fightin' by using the dab terms non(-)negative integer and positive integer. Those who miss the excitement can still go tete2tete over the hyphen in non(-)negative. Jon Awbrey 16:18, 12 February 2006 (UTC)
Oh, were it so easy! The problem, both historically and pedagogically, is that the natural numbers come before the integers, so your definition defines the simpler concept in terms of the more advanced concept, which is, of course, then in turn defined in terms of the simpler concept. Rick Norwood 19:40, 12 February 2006 (UTC)
  • JA: This is about a current usage that avoids ambiguity — and that's a good thing. It's not about some sourcerror's apprentice game of "Quintessentially Universally Ideally Definitive Definitions In Totalizing Conceptual Hierarchies" (QUIDDITCH). Jon Awbrey 22:42, 12 February 2006 (UTC)
The problem is that there are two current usages. Each, individually, avoids ambiguity. As long as both are in use, ambiguity is inevitable. All, then, that is necessary to avoid ambiguity is for one side or the other to give in. That hasn't happened yet. Rick Norwood 22:16, 13 February 2006 (UTC)

Relatedness of items in history section

I noticed that much of the information in the history section has little to do with the content of this article. For example, is this really the page to mention the history of solutions to quadratic equations? I do think that there needs to be some discussion regarding the scope of this page before major changes are made. Grokmoo 18:15, 13 February 2006 (UTC)

Yes, as pointed out above, the history section could do with a good copy edit. As to why quintics are important to the development of number: they are the lowest-degree polynomials whose solutions cannot always be expressed by radicals (algebraic expressions involving roots), hence opening the way to transcendental numbers. They were also an important step in the development of Galois theory, the main technique for proving the transcendence of pi and e. --Salix alba (talk) 22:17, 13 February 2006 (UTC)
Yes, but perhaps these sorts of things belong in number theory. If not, then there should at least be some mention of how these things relate to the main topic. Grokmoo 02:10, 14 February 2006 (UTC)
They definitely do not belong in number theory; that field is about properties of integers, especially primes. This article is about the different number systems and how they are related. --Salix alba (talk) 09:40, 14 February 2006 (UTC)
Number theory is not only about integers, but also about algebraic and transcendental numbers (see algebraic number theory). Of course, it doesn't necessarily follow that the connection quintics → algebraic numbers should not be mentioned here. -- Jitse Niesen (talk) 11:30, 14 February 2006 (UTC)

Minor fix

square root of to Macwiki

Imaginary unit

But actually, i is not the square root of −1, since negative numbers can't have a square root. In reality, it should be said thus: i² = −1, which is not the same thing. This is actually said in the article about the imaginary unit.

The square root function is extended from the real numbers to the complex numbers, after which the principal square root of minus one is indeed i. Still, it might be better in the article to say i squared is minus one. Rick Norwood 14:16, 11 April 2006 (UTC)
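To spell out why "i squared is minus one" is the safer phrasing (a standard remark, added here only for clarity): −1 has two complex square roots,

    i^2 = (-i)^2 = -1,

so speaking of "the" square root of −1 requires first choosing a principal branch; the convention \sqrt{-1} = i is exactly such a choice, while the equation i^2 = -1 needs no convention at all.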

Origins of Number

Perhaps it would be worth adding some reference to theories about the origins of counting, such as that put forward in A. Seidenberg, 'The ritual origin of counting', Archive for History of Exact Sciences, Volume 2, Issue 1, Jan 1975, pp. 1–40. I don't have a copy of this paper lying around, so am not best placed to add the material. — Stumps 13:38, 15 February 2006 (UTC)

Ben Standeven's edit

Good work removing bullet points and shortening. I hope you, or somebody, tackles the rest of the bullet points.

In particular, what is already covered in Numeral should not be repeated here unless very briefly. Rick Norwood 13:59, 15 February 2006 (UTC)

problems with the introduction

Here is how the introduction to this important article read before my recent edit. Below, I mention some of the problems with this version.

"Originally the word number meant a count, or tally, of discrete objects. These days this application is referred to as cardinal number. Later use of the word came to include quantifying measurements of basic physical quantities like mass, length, area, volume and time.

Mathematicians have extended the meaning of number to include abstractions such as negative numbers, transcendental numbers and the square root of −1, also known as the imaginary unit i. In common usage, one may find number symbols used as labels (telephone and highway numbers). They are also used together with, or instead of, alphabet letters to indicate ordering (serial numbers and letters). The latter usage is encapsulated in the notion of ordinal number — first, second, third, etc."


"Originally the word number..." The word "number" is of relatively recent origin -- it is the meaning of the concept, not the word, that is the subject of this article.

"...referred to as cardinal number". No, ordinals are also used for counts or tallies: "first, second, third".

"...later use of the word..." we do not know enough about the origins of numbers to say definitively that counting came earlier than measuring. This is an assumption.

"...used together with, or instead of, alphabet letters..." Rather, occasionally alphabet letters are used in serial numbers, but this is a minor point, not one necessary to cover in the article.

The article, especially the introduction, needs to concentrate on the concept number. There is already an article on numeral. Rick Norwood 14:57, 18 February 2006 (UTC)

Although I am pleased to note that somebody else has taken an interest in the intro to this article, I do think several points in Rick's edit need addressing.

The first point seems OK to me; I was trying to find a suitable opening sentence but ended up with something akin to a dictionary definition of a word :-(

Although ordinals are used as an aid to counting, it is not necessary to use them to obtain a count. Putting sets into one-to-one correspondence with a collection of standard sets will also allow one to obtain the cardinality, a process that is akin to tallying against marks made e.g. on paper with a pencil. Furthermore, mathies seem to agree that cardinals are more basic than ordinals, and yet Rick's edit has removed any reference to the former from the article.

I don't know enough archeology to challenge the third point, but it must surely be known which one comes first in the historical records. I doubt very much if Neanderthals thought about measuring before they got to count.

The extension provided by adding letters to numbers, creating "alphanumerics", is very useful in categorisation. I have added this thought to the intro. Nominal data in statistics is often coded this way, and the use of alphanumerics even spills over into ordinal-scale data utilised by statisticians.

Final point I more or less concur with, but the introduction needs to catch the reader's attention a little, too. John H, Morgan 19:32, 18 February 2006 (UTC)

There are monolithic monuments more ancient than any writing which has survived, which require both the ability to count and to measure. The earliest surviving evidence of number that I am aware of concerns counts. Sealed jars containing a number of pebbles exist. Apparently a caravan master carried the sealed jar along with his cargo, so that the merchant at the other end could break open the jar, count the pebbles, and make sure the caravan master had delivered the required number of items. These jars predate any discovered writing. But presumably measurement was also in use, if only in the form of laying off a distance using sticks of equal length. There exist very ancient 3-4-5 triangles. In any case, I'm sure we can work together to get the intro into good shape. Rick Norwood 00:49, 19 February 2006 (UTC)

Ben Standeven's edit

Good work removing bullet points. I'm going to change the present tense to the past in the material you added, though.

The difference between References and Bibliography is this, and it is essential. References are works actually used in writing the text, or works that verify statements in the text. Bibliography is much more general, and can include other works on the same subject and also related works, even if nothing in them appears in the article. Rick Norwood 23:44, 28 February 2006 (UTC)

Inadequate summary definition

A number is an abstract entity that represents a count or measurement.

...Yet, then it goes and talks about i.

I think this article needs a more comprehensive summary definition for a number.

I noticed this because I don't really know what the definition of "number" is, myself. So I can't really help, sorry. LogicalDash 02:16, 17 April 2006 (UTC)

Well, the article does state that mathematics has extended the definition (i.e. a representation of count and measurement) to include negative, transcendental and imaginary numbers. Maybe we could say what these sets represent though, something like "these extensions came about as the need to solve certain equations arose, hence they represent solutions to equations that no natural number is a solution of". Ugh, that's dodgy, maybe someone can formalise it a bit :)
Anyway, the point is numbers are nothing but inventions we've created as we needed them. We started by inventing numbers to describe how many apples someone has (count) or how many steps to the apple tree (measurement). Eventually it became useful to invent negative numbers to solve simple equations (for example, to represent debt). No-one really has -3 apples, but it is useful to represent it as so. The same thing applies to transcendental numbers and imaginary numbers. --darkliighttalk 06:49, 17 April 2006 (UTC)
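One way to firm up the suggestion above (a standard presentation, not darkliight's exact wording): each extension is motivated by an equation unsolvable in the system before it,

    x + 1 = 0    (no solution in N; motivates Z)
    2x = 1       (no solution in Z; motivates Q)
    x^2 = 2      (no solution in Q; motivates the algebraic irrationals)
    x^2 + 1 = 0  (no solution in R; motivates C)

with the caveat that equation-solving alone never yields all of R: transcendental numbers such as pi are by definition not roots of any polynomial with rational coefficients, and the step to the full real line is one of completion (limits) rather than algebra.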
My personal point of view is that mathematicians are now really really sorry they ever considered the complex numbers numbers; while the quaternions are now usually stripped of that title, it's kind of stuck for complex numbers.
(Other mathematicians disagree, of course).
If it helps at all, "number", without a qualifier, is now very rarely used by mathematicians to formally refer to anything that is not a real number; the situation is similar to the word "tea", which, without qualifier, refers to a beverage made from Camellia sinensis, but which can refer to other plants when an adjective is added to it, as in "herbal tea".
I would thus agree with the summary definition as it now stands, with the caveat that mathematics is currently un-extending the definition of "number" at least back to complex numbers. RandomP 14:09, 25 September 2006 (UTC)
RandomP, you are the first mathematician I ever saw or heard stating that (s)he didn't like to consider complex numbers as numbers. On the other hand, I've seen a number of expositions on the theme 'Complex numbers are as real as real ones' (and of course also explain this myself when teaching). I also have no idea where you got the idea that unqualified use of 'number' only 'very rarely' refers to anything but a real number - except, of course, if you mainly are considering beginners' undergraduate texts in e.g. calculus. My experience is that in texts written by people who are mathematicians in the first place, 'number' is not used unqualified; but the qualification is often placed early, and then covers the entire usage of the term. In a book where the scope is not as fixed (say, an introduction to 'real and complex analysis'), there may be reason to qualify each usage of the term. In an 'arithmetic number theory' context, just positive integers may do fine; while in a field like algebraic geometry, where working over a field that is not algebraically closed makes a tangible difference, assuming all varieties to be complex whenever the converse is not explicitly stated may be more natural.
Writers who primarily are something else, e.g., physicists, sometimes seem to be less clear (and I must admit that sometimes so are algebraic geometers). I suspect that some physicists tend to consider numbers as more 'discovered' than 'invented', and that this makes a difference. From this point of view, and since complex numbers indeed seem to be indispensable to physics, they may glide from real to complex numbers without the reader noticing.
What is true, on the other hand, is that 'numbers' are not so important to distinguish in much of modern mathematics, since it is now recognised that they form just a limited set of possible basic structures. In linear algebra, e.g., 'scalars' may be elements in any fixed field, not just R or C. JoergenB 16:44, 25 September 2006 (UTC)
Sorry, should have been clearer on this: My claim might be less surprising if I rephrased it as: if the complex numbers were a new invention today, given the overwhelming connotations that people have with the term "number", another name would probably be chosen (consider "group" as another example). I don't think a mathematician talking to a lay audience would even consider saying "number" when they meant specifically "complex number", except in order to deliberately mislead their audience.
But my basic point stands: a complex number is as much of an unqualified (non-mathematical-context) "number" as a free group is an unqualified (non-mathematical-context) "group": the connotations just don't match.
I like the complex numbers as a field, particularly as a topological field. For me, the intuition of the unqualified term "number" is that it is something that is defined in analogy to a measuring process; that simply doesn't work for complex numbers, because of complex conjugation. Complex numbers require knowledge of an arbitrary convention to make sense, unless they happen to be real.
Of course, this personal point of view should be kept out of the article.
I would, however, argue vehemently (in other forums, obviously) against declaring the elements of any new mathematical structure (with a very few predictable exceptions) "numbers".
(I believe that "number of" is commonly used in mathematical writing as a synonym for "the cardinality of the set of". This is unqualified use of the term "number", but refers to cardinal numbers. As for the "very rarely" thing, I'd like to take that back. I still think it's rare for unqualified use of "number" by non-physicists to refer to complex non-real numbers.).
Of course complex numbers are "as real" as the real numbers. They're just not as numbery. ;-)
RandomP 17:24, 25 September 2006 (UTC)

Redirects

Not sure where to put this, so I put it here. Many of the x (number) pages for 3-digit numbers simply redirect to the previous 100, i.e. abb (number) redirects to a00 (number). Is this really desirable? I think it's confusing. ~iNVERTED | Rob (Talk) 22:52, 26 August 2006 (UTC)

'Decimal number' definition

I tend to use the term 'decimal number' either for a real number which it is possible to write in decimal form, i.e., a rational number with least denominator dividing some power of ten; or (slightly inappropriately) for an integer in base ten representation. In the first case, 1/4 is a decimal number, but 1/3 isn't. (This sense is semantically equivalent to decimal fraction, as defined in the decimal article. However, some people dislike the term 'decimal fraction', which they consider an oxymoron.) In the second case, I would call the base ten representation 28 of twenty-eight decimal, but not the base sixteen representation 1C of the same number. I try to refrain from this usage, however, if I don't have confidence in my listeners' ability to distinguish numbers on the one hand and their representations on the other.

In the real numbers section, 'decimal numeral' seems to be used with a rather different meaning. I note that the reference is just a redirect to decimal. I would like to know if you've discussed this before, reached a consensus on this other use of the word, and decided to make an article out of it. Otherwise, I'd like to avoid the term in this context.


Please note that this may be rather confusing. I've several times encountered university students who considered 1/3 (but not 1/4) as a good example of an 'infinite number'; and this clearly came from confusing representations of numbers and the numbers themselves. JoergenB 17:17, 25 September 2006 (UTC)
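JoergenB's first sense can be stated compactly (just restating his definition in symbols):

    D = \{\, a/10^n : a \in \mathbb{Z},\ n \in \mathbb{N} \,\}

So 1/4 = 25/10^2 lies in D, while 1/3 does not, since no power of ten is divisible by 3, which is precisely why students meet 0.333... as their first 'infinite' expansion.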

Don't want to be too nitpicky, but "decimal number" isn't used in the article. "Decimal numeral" is, though things get a bit sloppy, and "decimal" is too. I'd suggest removing instances of the latter, unless you have a better idea? RandomP 17:30, 25 September 2006 (UTC)
I stand corrected! Sloppy reading... but 'decimal numeral' is not clearer to me.
Yes, I'd like to delete the term; but it might be there for a reason. If so, I'd like to hear that reason first. JoergenB 17:49, 25 September 2006 (UTC)
I like "decimal numeral" - a "string" (which might be infinite) of decimal digits, with an optional minus sign, a (non-optional) decimal point, which represents a unique real number (but several numerals might represent the same one).
It appears to me to be very close to a lay idea (I'm treating this as an article for a non-mathematical audience, BTW, just to avoid confusion) of what a real number is.
Of course there might be a better term, but I haven't heard it. (But I hear there are professional mathematicians on WP who might be able to suggest one, hint, hint)
"decimal" without the "numeral", though - totally unacceptable to me.
I'll give this a shot.
RandomP 18:43, 25 September 2006 (UTC)
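For concreteness, an example of the non-uniqueness RandomP mentions:

    0.4999\ldots = 0.5000\ldots = 1/2

Every real number with a terminating expansion (other than 0) has exactly two decimal numerals, one ending in repeating 9s and one in repeating 0s; every other real number has exactly one.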
By the way, feel free to stop me at any point and call a time-out for discussion. The article should be acceptable to everyone.
You ask several times for a reason behind previous editorial decisions, and whether there has been discussion: generally, if the discussion is still relevant to the article, it should be visible on the talk page (or its archives); however, you might want to try looking at the (much-vandalised, and somewhat long) history. RandomP 19:06, 25 September 2006 (UTC)

Replace "decimal numeral" by "decimal representation"?

I'm undecided about this, but I might like to replace "decimal numeral", which is a nonstandard term, by "decimal representation", which is a standard term that some think applies only to positive (or non-negative) real numbers. An alternative would be to use "signed decimal representation", or a similar neologism.

Opinions?

RandomP 19:14, 25 September 2006 (UTC)

I've checked a few things now... I notice that in the 'classical (in my POV)' book "Science awakening" by B.L. van der Waerden (translated by Arnold Dresden; second English edition; P. Noordhoff ltd; no print year or ISBN given), 'numeral' is used for the single symbols only, while compounds like 'notation for numbers' and 'number systems' are used for systematic ways of representing numbers by means of symbols. E.g., the subsection "Hindu numerals" starts:

Where do our Arabic numerals 0 1 2 3 4 5 6 7 8 9 come from? Which people invented our excellent decimal positional system?

Unless someone documents other, more modern usage, I'd be inclined to retain the limited van der Waerden usage of 'numerals'. JoergenB 09:46, 26 September 2006 (UTC)


Er, I'm not quite sure I understand: do you want to switch to van der Waerden's usage ('numerals' for single symbols only), or do you want to stay with the current usage (where 'numeral' refers to a string of digits, plus a decimal point and sign in some cases)?

If the latter, I think we've got all the support we need for that :-)

RandomP 20:55, 3 October 2006 (UTC)

The former. I'd like to retain the older (v. d. W.) usage, which implies changing this article; unless someone presents support for what you call 'the current usage'. In other words, is this a current usage also outside wikipedia? JoergenB 15:31, 4 October 2006 (UTC)

"should" is not a good word for wikipedia

Numbers should be distinguished from numerals, the symbols used to represent numbers

Really? Why? It's perfectly possible (if really really awkward) to define a number to be (the mathematical representation of) a symbol used to represent that number. I think that one of the things that makes mathematics hard to understand for some people is that, ultimately, the mathematical definition of most number systems is a string of symbols to be formally manipulated according to certain rules. That's not the intuition I (and, I would think, most other mathematicians) have, but it might be a bit of a letdown for people first to be told that a number isn't really a string of symbols, only to be then taught otherwise by a moderately eccentric maths professor.

I'm not sure how to fix this. "can be distinguished"? "are distinguished"?

RandomP 19:24, 25 September 2006 (UTC)


wikipedic medieval zero

For the time being I would restrict my criticism to one paragraph of this text (History of zero), namely the one concerning the so-called medieval zeros. Wikipedia says: "When division produced zero as a remainder, nihil, also meaning nothing, was used." The Latin word 'nihil', possibly sometimes abbreviated to N, does indeed mean 'nothing', but in early medieval Europe it never means 'zero', for in this Europe nobody knew the number zero. In Latin no word for zero existed, because the Romans did not know the digit zero, let alone the number zero. Keep in mind that in early medieval Europe division was always repeated subtraction (there were no division algorithms). Where Beda Venerabilis explains dividing 198 by 30 (using in his calculations no other numerals than Roman ones), he says first that 6 times 30 makes 180 and then that there is a remainder of 18, or that 18 is left over. But he refrains from using the number zero to tell us which remainder one obtains when dividing 210 by 30; for answering this decisive question he simply says "nihil remanet" or the equivalent "non remanet aliquid" (see "De Temporum Ratione"), meaning "there is nothing left over" and "there is no remainder" respectively. So there is no reason at all to conclude Bede meant 'zero' by his 'nihil'. He and his great predecessor Dionysius Exiguus simply did not know the number zero, because nobody in early medieval Europe did.

In Faith Wallis' standard work about "The Reckoning of Time" we find a modern version of Beda Venerabilis' Easter cycle, with our modern numerals and with lunar epacts being 0 every nineteen years, and even with the year -1 (= the year 1 BC). But in Bede's original manuscripts you will see no nonpositive numbers at all and find only 'nullae', meaning only 'none', or 'nihil', meaning only 'nothing', in the places where you would expect to find the number zero. Where Bede uses Roman numerals he never uses zero. And where he enumerates Greek numerals he does not observe that there is no digit zero among them.

Zero is a digit (our tenth digit) and simultaneously a number (even the most important one). Therefore knowing the meaning of the term 'nothing' does not include knowing the meaning of the term 'zero' (if it did, Adam would be the inventor of zero). Knowing the number zero includes knowing how to use the digit zero in one's calculations. But Dionysius Exiguus and Beda Venerabilis and even Gerbert (who became pope Sylvester II in the year 999) could not possibly make use of the digit zero in their computations, because in first millennium Europe nobody knew the digit zero and no symbol (0 or something else) or word for this digit existed. Conversely, they did not need the digit zero at all, because there were no algorithms available yet (DE around 520 and BV around 720) or one made do with simple algorithms in which the digit zero played no part (Gerbert around 980); keep in mind these men were no mathematicians of consequence. The only mathematician of consequence in early medieval Europe was Boethius (around 500), but even in his writings we find no trace of zero. Inventing the number zero did not happen in Europe but in India. It was the great Indian mathematician Brahmagupta who (about the year 630) was the first who not only used the digit zero in his calculations but also made explicit the most important properties of the number zero. The number zero reached Europe only around the year 1200.

I establish that the wikipedic medieval zero rests only on the misunderstanding that zero amounts to 'nothing'. So the wikipedic medieval zero is nothing indeed; it does not amount to anything, it is a dummy. It is neither the digit zero nor the number zero, it is no zero at all, let alone a true zero. Inventing zero is much more by far than abbreviating 'nothing' to N. Consequently, there is no serious reason to maintain (and every serious reason to delete) the paragraph concerning the so-called medieval zeros. So I would propose to delete that paragraph or to replace it with a text compatible with reality. I hope to meet with approval, and to enter into consultation with everyone who wants to support me as well as with everyone who wants to resist. Away with the wikipedic medieval zero! :) Jan Z 12:48, 11 December 2006 (UTC)

It is not very clear to me exactly what you are saying, but then again, the article's use of "true zero" isn't very clear either. One thing is clear, however: treating zero as a number does not at all depend on having a zero digit. The article is quite clearly talking about symbols representing zero/nothing in contexts where numerals were used. This may not be treating it as a number as much as Brahmagupta did, but I don't think it should be simply deleted. 15:37, 11 December 2006 (UTC)
What DE, BV, Gerbert and everyone in Europe before about 1200 did was by no manner of means "treating zero as a number", for their "zero" was only 'nothing', with which they didn't calculate at all. My prohibitive complaint against the wikipedic medieval zero is that it suggests that 'zero' is identical with 'nothing' (if such were the case, Adam would be the inventor of zero). We have to distinguish between 'zero' and 'nothing'. And we have to distinguish between the digit zero (we have ten digits) and the number zero (we have infinitely many numbers). It is the standard of Wikipedia that matters. And yes, we have to delete the paragraph in question. Jan Z 03:19, 12 December 2006 (UTC)
Rewriting it to be more accurate is usually more helpful than deleting. Our ten digits (we have infinitely many numerals) are completely irrelevant in this context, so your repeated mention of them (as "numerals") makes it very hard to understand your point. We clearly have to distinguish between the number zero and the digit 0, but it is you trying to connect them, not the article. JPD (talk) 14:39, 12 December 2006 (UTC)
First of all I have to adjust my terminology (which was based on books from the twentieth century). I would propose to defer the discussion until I have completed that assignment today. With my "our modern numerals" I had the symbols 1,2,3,4,5,6,7,8,9,0 in mind; it is permitted to refer to these ten symbols by means of the term 'our modern digits' instead of "our modern numerals", I suppose. May I ask you (JPD) by which analogous term we may refer to the particular Roman numerals I,V,X,L,C,D,M (seven in number), e.g. 'the Roman digits'? Jan Z 10:11, 13 December 2006 (UTC)
I don't know of any word other than "symbol" to describe symbols used in non-positional numeral systems. I think "digits" is reserved for the symbols that are used in a fundamentally different manner in positional numeral systems. However, I don't see why we need to refer to them at all. The subject of the paragraph we are discussing is zero as a number, a notion which is separate from, and does not depend on, use of a zero digit or symbol in forming numerals for other numbers. JPD (talk) 17:26, 13 December 2006 (UTC)
To correct my first text (the big one) I had only to replace the word 'numeral' with the word 'digit' (ten times), which I have now done. In the three cases where I used the word 'numerals', no replacement was needed. To correct my second text I had only to replace the word 'numeral' with the word 'digit' (once) and the word 'numerals' with the word 'digits' (once), which I have now done.

Let us return to the question of the wikipedic medieval zeros now. The point is that discovering the number zero occurs in three phases: (1) using a symbol (e.g. 0) or word (e.g. nihil) meaning 'nothing' or 'none' or something similar only as a placeholder; (2) using a symbol (e.g. 0) or word for the digit zero within the framework of systematic calculations (as until recently pupils did when carrying out a long division); (3) using a symbol (e.g. 0) or word for the number zero within a mathematical framework on some (not necessarily high) level of abstraction. It is not advisable to speak of 'knowing zero' in the case of someone (e.g. a child) saying "there is nothing in our letterbox" or "our letterbox is empty". 'Knowing zero' means 'being at least on the second level of abstraction', which implies that in early medieval Europe nobody knew zero, which implies that the wikipedic medieval zero is no zero at all (even when written as 0). Jan Z 18:13, 13 December 2006 (UTC)

Whether you use "digit" or "numeral" is a very minor issue. The problem is that you are even mentioning digits, and claiming that an idea of a number zero depends on using a numeral as a placeholder or digit in calculations, which is simply not true. I suspect that what you mean by "placeholder" above is different from what the article means. I agree that the use of "none remain" is questionably a zero, but you don't seem to have provided a reasonable definition of what would count as treating zero as a number. JPD (talk) 15:14, 14 December 2006 (UTC)
Whatever reasonable definition we may employ, the zerolike things DE and BV did can by no means be considered as treating zero as a number (this is the point indeed), but only as mentioning that certain divisions give no remainder. In this case it is a matter of no more than what historians of mathematics call a placeholder for zero. The problem is that the article means that something like a placeholder for zero is considered as zero itself. It is a simple truth that one cannot understand the meaning of the number zero if one has no experience with calculations in which zero plays an essential part. There is nothing from which we can deduce DE and BV knew zero. So there is no reason at all to abandon the current opinion that nobody in early medieval Europe knew zero. Jan Z 23:25, 17 December 2006 (UTC)

Jan Z, several writers imply that "nothing" becomes "zero" whenever it is or can be associated with other numbers. Although your examples "there is nothing in our letterbox" and "our letterbox is empty" do not explicitly involve numbers, they imply them because only discrete items which can be counted are placed in a letterbox (letters and packages).
A) Zero is a word for "no objects".[1]
B) "Some concept of the number zero was probably in use as early as human beings first began to do arithmetic, so it's impossible to trace its origin."[2]
C) "The number zero was probably discovered almost as soon as numbers were discovered. 'Zero' is after all synonymous with 'none'."[3]
D) "Zero must be distinguished from nothingness (null)" is clarified by the examples: "A person's grade in a course he never took is no grade or nothing. But he may, however, have a grade of zero. Or if a person has no account in a bank, his balance is nothing. On the other hand, if he has a bank account, he may very well have a balance of zero."[4]

The medieval zero quite explicitly involves numbers. A remainder of nihil indicated by both Dionysius and Bede is zero because it is the result of an arithmetic calculation. Furthermore, the inclusion by Dionysius of nulla within his column of epacts alongside Roman numerals (nulla, xi, xxii, iii, …) is the clearest possible statement that he included it in the domain of numbers.[5] Argument 9 requires the addition of an epact to another number in order to calculate the Easter moon, hence the additive identity n+0=n was required whenever the epact was nulla (zero). This is a significant characteristic of your abstract zero.

Just to emphasize this view, one author claims that Egypt used a symbol for zero (the triliteral hieroglyph nfr) in 2600 BCE for a level reference line and in 1700 BCE in bookkeeping for a remainder of zero.[6] I don't completely agree with the author that this was a symbol—although it may look like a symbol in hieroglyphs, its appearance would be drastically different when written in hieratic or demotic script. Nevertheless, it certainly was a word. — Joe Kress 07:13, 18 December 2006 (UTC)
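For readers unfamiliar with epacts, the column Joe quotes is easy to reproduce, which may make the n+0=n point concrete. A small Python sketch, assuming the usual rule that the Alexandrian epact grows by 11 each year modulo 30, with a jump of 12 (the saltus lunae) closing the 19-year cycle; the rule is my gloss, not something asserted in the comments above:

    def dionysian_epacts():
        """One 19-year cycle of epacts: +11 per year (mod 30), +12 at the wrap (saltus lunae)."""
        epact, cycle = 0, []                  # year 1: 'nulla', epact zero
        for year in range(19):
            cycle.append(epact)
            step = 12 if year == 18 else 11   # the final step absorbs the saltus
            epact = (epact + step) % 30
        return cycle

    print(dionysian_epacts())
    # [0, 11, 22, 3, 14, 25, 6, 17, 28, 9, 20, 1, 12, 23, 4, 15, 26, 7, 18]

Adding an epact to a date number then needs the case epact = 0 to behave as the additive identity, which is Joe's point about Argument 9.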

Thanks, Joe. The details you provide concerning Dionysius satisfy me that it is fair to call his nulla a number on any relevant level. I am still not quite sure what distinction Jan is making between zero and a placeholder for zero. The "placeholder" in the article seems to be referring to a zero digit used in a positional numeral system, not the number zero. JPD (talk) 13:17, 18 December 2006 (UTC)
Joe Kress, thanks for your detailed contribution. Let us try to come to terms. The implication you staged in your first paragraph is a product of your modern mind; it is no iron logical implication, for a two-year-old child or a cat does not relate an empty plate with discrete items which can be counted. This as such does not, of course, exclude that someone like BV (around 700) relates "no remainder" with cases in which the remainder is a positive integer. But the point is that in early medieval Europe 'nothing', even when notated with o or 0 or 'nulla', was never interpreted as a number (it seems that the same can be said even of the number one). Gerbert (around 1000) probably knew early European symbols for the Hindu-Arabic digits 1,2,3,4,5,6,7,8,9, which in his time had reached Spain, but if he knew them, no zero was included, and he did not use zero in his calculations. It's easy for us, with our modernized brains, to talk; sometimes we simply neglect the (nevertheless real) distinction between 'nothing' and zero, at a lower level of education temporarily using 'zero' as a word for 'no objects' in order to introduce the concept 'zero'. But we have to realize that sometimes we simplify (sometimes to an inadmissible extent) and that BV had no choice (for he knew only one of these two different concepts).
[1] An interesting example. We don't use zero in counting. Historically the discovery of zero comes long after that of the positive integers. Where the first discovery of (counting with) positive integers lies long before (let us say) the year -10000, that of zero lies somewhere between the years -4000 and 1000. Both discoveries did not happen from one day to the next; in both cases we have to do with a very evolutionary process of becoming conscious. It is not easy to define at which "moment" (for the first time) someone became (or becomes) conscious of the fact that 'nothing' can be represented by a number (with a certain unique property, but nevertheless a number like the positive integers known to him). In my opinion the only reasonable and ultimately workable criterion for defining that moment (insofar as this is possible) is the moment at which someone starts using zero when carrying out calculations (see my three levels of abstraction above).
[2] "Some concept of the number zero was probably in use as early as human beings first began to do arithmetic, so it's impossible to trace its origin.” agrees with my comment on [1].
[3] I disagree with this quotation.
[4] I agree with this quotation.
[5] If a person has no account in a bank, his balance is nothing, not zero. In such a case we would use a hyphen instead of a zero. And this is what DE and BV do: in their opinion sometimes there is no epact, namely when in the calendar year in question on 22 March, the earliest possible date of Easter Sunday, the moon is not visible (for it is new moon then). It is DE who set the trend. In the Easter table attributed to bishop Kyrillos of Alexandria (around 400), the table from which DE constructed his own, we see epacts of 30 instead of "nulla". DE altered epact 30 to "nulla". Dionysius' 'nulla' and Beda Venerabilis' 'nullae' in their columns of epacts stand for 'nothing', not for 'zero'. Nevertheless Dionysius' Argumentum IX is very interesting indeed. Evidently in this case DE has either the year 525 or the year 620 in mind, in which year the epact was 12. In this case he says somewhere "adde epactas", meaning in this case 'add 12'. Unfortunately he does not tell us what to do in the case that the epact is "nulla", but undoubtedly on being asked he would have answered "of course in this case there is nothing to add" or "of course in this case we have nothing to add". For saying "adding nothing", for us not different from "adding zero", DE has no alternative. What he does is by no means inventing something like the formula x + 0 = x. At the most he says that adding nothing to something produces no change (but undoubtedly he was not the first to say so). Nor can Argumentum III (De epactis) of the Argumenta Paschalia give us a decisive answer. But as long as we do not dispose of such a decisive answer, the wikipedic medieval zero remains an empty concept. Yes, it is the result of an arithmetic calculation, but it is only the result of an arithmetic calculation, meaning no more than 'nothing', and it is by no manner of means an essential element of arithmetic calculation. So "the inclusion by Dionysius of 'nulla' within his column of epacts alongside Roman numerals (nulla, xi, xxii, iii, …)" is inconclusive for arguing that the wikipedic medieval zero is a real zero. So there is no reason at all to abandon the current opinion that nobody in early medieval Europe knew zero.
[6] Concerning the triliteral hieroglyph nfr: "Nevertheless, it certainly was a word", according to you. Assuming it was a word, we want to know the meaning of that word. Perhaps it means something like 'basic level' (both in land surveying and in keeping the books), which for us (with our modernized brains) means something like 'level zero'. The old Egyptians had only something like 'basic level' in mind, not the number zero, I think.
To round off: compare (and attend to word order in the second short sentence) what Bede (around 700) says in his “De Temporum Ratione” after having carried out a division:
when the remainder is 1 he says “remanet i”;
when the remainder is 0 he says “nihil remanet” and in one case even “non remanet aliquid”.
Possibly it is the connection between these three short sentences which can be a deciding factor for our discussion, for in my opinion this connection makes clear that Bede’s ‘nihil’ is only ‘nothing’, and not ‘zero’.
JPD: Indeed, usually historians of mathematics mean by "placeholder" a zero digit used in a positional numeral system. But in my "In this case it is a matter of no more than what historians of mathematics call a placeholder for zero" the word "placeholder" must not be interpreted in that particular context. It would have been better to say "no more than something similar to what" instead of "no more than what". Jan Z 07:23, 20 December 2006 (UTC)
I really don't have time for a full reply, but Jan, you still don't seem to me to have given a compelling idea of what "treating zero as a number" means so that the examples given don't qualify. (For one thing, you still keep mentioning digits, which are completely irrelevant.) Apart from that, this is a Wikipedia article, so we shouldn't be arguing about whether it is a number according to our own ideas (original research), but including in the article sourced information on what they did (true zero or otherwise), and either not saying anything about whether it was a true zero, or relying on reputable sources to say something one way or the other (verifiability). JPD (talk) 17:29, 4 January 2007 (UTC)

JPD, thanks for your reaction. Yes indeed, the difference between the number zero and the digit zero is not relevant in the discussion. But what I meant was that one cannot know zero without experience in calculating with zero. Further, we have to try to eliminate a misunderstanding between you and me concerning the concept of 'number'. Firstly, we have to distinguish between the term 'number' in compounds like 'number of apples' and the mathematical, abstract concept. Secondly, in my opinion, your "treating zero as a number" betrays the misconception that 'nothing' and 'zero' should be one and the same thing. But we have to distinguish between 'nothing' and 'zero'. A child saying "5 apples minus 5 apples gives no apples" does not necessarily know zero (Adam did not know zero, but he did know the meaning of "5 apples minus 5 apples gives no apples"). In short, your "treating zero as a number" is an inadequate formulation of my 'treating nihil as zero', which can only mean 'calculating with nihil as if it were the (abstract) number zero'. So the discussion concentrates on the question whether somewhere in early medieval literature 'nihil' is not only 'nihil' but also a real 'zero' (i.e. a number zero according not so much to my or your "own ideas", but according to the usual, modern, mathematical definition of the concept, known to you, I suppose). Using my first convincing reputable source, namely Dionysius Exiguus' comment on his own Easter table, we are able to verify (see e.g. above, or the third one of my following remarks) that, concerning DE, the answer is no (for when we analyze the cases in his comment where 'nihil' seems to be a real 'zero', it proves to be only 'nihil'). And analyzing my second convincing reputable source, namely Beda Venerabilis' comment on his own Easter cycle (see his "De Temporum Ratione"), we can verify (see above, or the second one of my following remarks) that, concerning BV, the answer is no again. But if my simply verifiable arguments do not convince you, then for the sake of the wikipedic neutral point of view it is you who has to argue for what reason Wikipedia wants to abandon the current opinion that nobody in early medieval Europe knew the number zero. So I would invite you to tell where in early medieval literature such a 'nihil' proves to be a real 'zero', or to tell to which convincing reputable source (outside Wikipedia of course) you would refer.

The following additional and summarizing remarks are intended for everyone who wants to participate in this discussion, in particular for JPD and Joe Kress:

1 In view of its definition (zero is a name of our tenth digit, mostly indicated with the symbol 0, as well as of the number 0 with the unique property that x + 0 = x) the term ‘zero’ has by no means the same meaning as the Latin word ‘nihil’ (meaning ‘nothing’) or as the Latin word ‘nulla’ (meaning ‘none’). So someone saying “5 apples minus 5 apples gives no apples” does not necessarily know zero. But knowing zero includes calculating with zero.

2 In early medieval Europe no division algorithms were available yet and division boiled down to repeated subtraction. Where Beda Venerabilis in his important book "De Temporum Ratione" about time reckoning explains dividing 725 by 19 (just like Dionysius Exiguus using no other numerals than Roman ones), he says first that 19 times 30 makes 570 and that 19 times 8 makes 152, and then "remanent iii", meaning that the remainder is 3. But he refrains from using the number zero to tell us which remainder one obtains when dividing 910 by 7; for answering this decisive question he says, after having noted that 7 times 100 makes 700 and that 7 times 30 makes 210, simply "nihil remanet" or the equivalent "non remanet aliquid", meaning "there is nothing left over": no trace of zero (just like DE). Moreover, he never calculates with 'nihil' as if it were a number. And where he enumerates Greek numerals he does not observe that there is among them no symbol or word for zero. There is nothing from which we can deduce he knew the number zero. His 'nihil' means only 'nothing' indeed.

3 There is nothing in his arguments from which we can deduce that Dionysius Exiguus knew zero or calculated with ‘nihil’ as if it were a number. Where we say that the epact is 12, DE says “duodecim sunt epactae”, which literally means “twelve are the epacts”, i.e. “there are twelve epacts”. The interesting question is what DE says where we would say that the epact is 0. In that case he says (Argumentum XIV) “Anno primo, quia non habet epactas lunares, …”, which means “In the first year, which does not have lunar epacts, …”. This clearly implies that the meaning of the Latin word “nulla” in his column of epacts is “there are no epacts” (in which case we would do better to put a hyphen instead of a zero): no trace of zero. Still in the same sentence he tells us furthermore how every “nulla” in his column of epacts is connected with the epacts of the previous calendar year, by means of something like our calculating modulo 30, where however ‘30 epacts’ is congruent to ‘no epacts’ modulo ’30 epacts’, since for the calendar year in question he establishes “nihil remanet de epactis”, which means “nothing remains from the epacts”. But as long as one is calculating with epacts as children calculate with apples, we cannot yet speak of ‘knowing zero’. It is not necessary to know ‘zero’ to understand that 5 apples minus 5 apples gives no apples. The interpretation of Dionysius’ “nulla” as a zero is ours, not his. Where DE sees purely and simply a column of mutually related separate “numbers” of epacts such as ‘no epacts’ or ‘11 epacts’, it is our modernized brain which thinks it sees a mathematical structure (a sequence x1, x2, x3, …) of pure nonnegative integers. And where DE or BV is explaining his calculations with (abstract) positive integers, as soon as zero comes into sight (i.e. enters our field of vision) he lapses into a less abstract terminology. Though DE and BV used ‘nothing’ or ‘no epacts’ as the result of an arithmetic calculation, they did not use it in their calculations (they had no algorithms at their disposal, and their calculations were supported only by abacus-like aids); this is of crucial importance, for it means that they had not yet become conscious of the fact that ‘nothing’ can be represented by an abstract number like the positive integers known to them. So they did not in the least include their “nulla” or “nullae” in their set of numbers, which consequently would always be no more than the set of positive integers.
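
(For readers who want to see the modulo-30 structure that, on this argument, our modernized brain reads into the epact column, here is a minimal Python sketch; the printed sequence matches the Dionysian epacts, with 0 standing where DE wrote “nulla”. The code itself is purely illustrative.)

    # The 19-year cycle of epacts read as arithmetic modulo 30: each year adds
    # 11, and the final step (the saltus lunae) adds 12, returning to the value
    # we would write as 0 and Dionysius Exiguus wrote as "nulla".
    epacts = [(11 * year) % 30 for year in range(19)]
    print(epacts)  # [0, 11, 22, 3, 14, 25, 6, 17, 28, 9, 20, 1, 12, 23, 4, 15, 26, 7, 18]
    assert (epacts[-1] + 12) % 30 == epacts[0]  # the jump from 18 back to "nulla"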

4 There is a connection between Dionysius’ epact “nulla” and new moon (after all, epacts indicate the “age” of the moon on 22 March, the earliest possible date of Easter Sunday). For in every first year of the nineteen-year cycle, i.e. every calendar year “which does not have lunar epacts”, it holds not only that “there are no epacts” but also that there is no visible moon on 22 March (for it is new moon then). In the view of DE, in any calendar year in which a new moon is born on 22 March (once every nineteen years) there are no epacts yet, and not surprisingly this is what he says. In all probability it is for this reason that DE altered epactae “XXX” to epactae “nulla”.

5 For Dionysius Exiguus and Beda Venerabilis, as for us, ‘adding nothing’ boils down to ‘doing nothing’. But to be able to conceive refraining from any action (namely adding nothing) as a special case of adding something (namely adding ‘zero’) takes more than some skill in carrying out calculations with positive integers. Keep in mind that DE and BV were no mathematicians and that nobody in early medieval Europe was able to make that decisive step up to a higher level of abstraction.

6 In short, the wikipedic medieval zero is a product of our modernized brain; it is no real zero. It is our modernized brain which tries to hoax us into believing we see the number zero where early medieval scholars meant only ‘nothing’.

Something will have to be done about the wikipedic medieval zeros, not only in this Wiki item [Number] but also in the Wiki item [Dionysius Exiguus]:

1 In this Wiki item [Number] I demur at the terms ‘true zero’, ‘medieval zeros’ and ‘true zero symbol’ in the paragraph concerning DE and BV, because Dionysius’ and Bede’s ‘nothing’ is no zero at all (see above). I would recommend deleting the whole paragraph. This is the best option, because its significance for mathematics is nihil. An alternative option would be to replace the first three words with “An imperfect zero”, “but” with “only”, “medieval zeros” with “medieval “zeros””, and “a true zero symbol” with “in all probability only an abbreviation for ‘nihil’ and no true zero symbol, because it was never used in calculations”.

2 In the Wiki item [Dionysius Exiguus] I demur at the suggestion that DE knew the number zero. For the sake of the wikipedic neutral point of view it is the editor of the paragraph in question (Joe Kress?) who must argue why Wikipedia should abandon the common assumption that nobody in early medieval Europe knew the number zero. In that paragraph we find only one argument in favor of his opinion, and in my opinion it is a very weak one (“Zero can probably be found in earlier Latin mathematical treatises”). To begin with, I would invite him 1 to take cognizance of my arguments above (which, I think, demonstrate the impropriety of his subjective interpretation of Dionysius’ ‘nulla’ and ‘nihil’), and 2 to formulate his arguments or to reconsider his particular point of view. It would be much better to leave ‘nulla’ untranslated, or to put a hyphen instead of it, than to give any subjective interpretation of it as a zero. I look forward to receiving his reply. Jan Z 21:20, 9 January 2007 (UTC)

Today 'knowing zero' means 'knowing nothing', I am afraid; please distinguish this from knowing 'zero', which implies knowing how to calculate with the number zero. Jan Z 16:38, 12 January 2007 (UTC)

Jan, I'm not sure what "knowing" a number means. The article talks about using the number zero, not "knowing" it. Apart from that, you make some good points, and it would be interesting to discuss some things such as what you mean by "algorithm", but I am afraid that you have misunderstood my main point, which is that this is not the place for your arguments, my arguments or Joe or anyone else's arguments about whether they were using the number zero. What you call "verifying" that the authors under discussion did not use nihil as a real zero is actually original research. In fact, some people would call it original research even if you had given sources for what the "usual, modern, mathematical definition of the concept" is, which you haven't. To say anything in the article(s) about whether they used a "true zero" we need to provide references not to their text, but to works stating that they did or did not use zero.
One of your solutions is to delete the whole paragraph, which would definitely remove any original research, but is rarely helpful. The question of whether the content of the paragraph is worthy of inclusion is one in which arguments and something like "original research" are necessary, and I fail to see why you say the use of nihil in the Middle Ages is insignificant. It is only insignificant to the history if it is no different to anything that had been done previously in that part of the world. Otherwise, a description of what they did do, whatever we call it, is appropriate. JPD (talk) 18:22, 12 January 2007 (UTC)

JPD, of course “knowing ‘zero’” = “using the number zero”, which implies knowing how to use it in carrying out calculations. But DE and BV did not use their ‘nihil’ in carrying out calculations. So their ‘nihil’ is no zero at all, let alone a true zero. But for the sake of the wikipedic neutral point of view it is you who has to argue why Wikipedia should abandon the current opinion that nobody in early medieval Europe knew the number zero. So in the first place I would repeat my request to tell where in early medieval literature such a ‘nihil’ proves to be a real ‘zero’, or to name the other convincing reputable source (outside Wikipedia, of course) to which you would refer. I look forward to receiving your reply. Jan Z 21:25, 13 January 2007 (UTC)

I would propose to continue the discussion concerning the wikipedic medieval zeros in the Wiki item [Dionysius Exiguus] at that page. Jan Z 08:05, 14 January 2007 (UTC)


Of course? To me the phrase "knowing zero" doesn't have any meaning at all, but aside from that, you are missing the point again. It is not me, you or anyone else that has to argue anything. We either need to tone down the wording of that paragraph so that it says nothing more than that DE and BV used nihil and nulla in a certain way, or give sources saying whether or not it should be considered a true zero. Looking at the early medieval literature ourselves is not enough to say something like that. Since you keep referring to the "current opinion", I was hoping that you would give sources showing that this is indeed the current opinion of most historians of mathematics. (Surely this fact, if true, is relevant to the article and should be included?) You are right to insist that the current text be backed up by a source, but surely giving relevant sources to correct it is better than simply demanding deletion? JPD (talk) 20:00, 14 January 2007 (UTC)


JPD, the only convincing reputable sources I have seen on the subject state, explicitly or implicitly, that DE and BV did not use zero: D.E. Smith, B.L. van der Waerden, D.J. Struik, G. Declercq; and my own investigations made it clear to me that they were right. I have never seen any convincing source stating that they did use zero. For that reason we have either 1 to delete or 2 to adjust the paragraph in question. If we choose the second option, the most important thing we can mention about the subject is the fact that DE and BV did not use zero (in connection with the fact that they used only abacus-like aids and no algorithms at all). Jan Z 00:34, 16 January 2007 (UTC)

For my interest, it would be good to quote exactly what these authors said on the matter. Of course, you wouldn't actually need to do that to alter the paragraph, but you would need to be more precise. For example, in what publication does Struik talk about use of the number zero in medieval Europe? The important things to mention, in my opinion, are what DE and BV did do, mentioning how this has been contrasted with those before and after them. You suggested changes above, including saying "an imperfect zero", but I think "imperfect" is much too loaded a term to explain that it wasn't fully what we call zero today. Apart from that, I still dispute the idea that abacus-like aids and "algorithms" have anything to do with it. JPD (talk) 11:23, 16 January 2007 (UTC)

According to my sources, either implicitly (D.E. Smith: History of Mathematics; B.L. van der Waerden: Science Awakening; D.J. Struik: A Concise History of Mathematics) or explicitly (G. Declercq: Anno Domini), DE and BV did not use the number zero. The only sources for what they did do are DE's own "Argumenta Paschalia" and BV's own "De Temporum Ratione". It is not difficult to convince ourselves (by objectively analyzing their texts) that they were no exception to the generally accepted rule that in early medieval Europe nobody used the number zero. It is our modernized brain which tries to hoax us into believing we see the number zero where early medieval scholars meant only ‘nothing’. Jan Z 07:13, 17 January 2007 (UTC)

In several places Faith Wallis translates Bede's nulla as "zero" in Bede: The Reckoning of Time: "In the first year of the 19-year cycle, in which the epact is zero" (chapter 20, page 64); "But then the Egyptians, to whose judgement the catholic Church now gives its consent, fix the change in the first year of the 19-year cycle, making the annual lunar epact, whose locus is the 11th kalends of April [22 March], to jump from 18 to zero [in nullam]." (chapter 42, page 114); and "In the first year of the Paschal cycle the epact is zero" (chapter 46, page 125). Decisive to this discussion is Bede's quote of Dionysius Exiguus, "And because [the epacts] run out at the end of 30 days, the epact of zero is placed at the beginning of this cycle, but the second year takes an epact of 11." (chapter 42, page 115) where Wallis (having a modern mind) indicates that DE used "zero". (I emphasized all uses of zero.) Of course, there is "Appendix 2: Bede's 532-year Paschal table" on pages 392-404 (already mentioned and rejected by Jan Z) where the first epact of every 19-year cycle under "lunar epact" is "0". These are virtually identical to their use by DE (except that the latter listed only five or six cycles). These instances of the use of zero or 0 by Bede (and DE) are sufficient for inclusion in the article. The symbol N was used for nulla by Bede or one of his colleagues in an epact table (Corpus Christianorum, Series Latina, CXXIII C, p.550), but this is obviously an isolated use, unknown otherwise. Nevertheless, this is similar to the use of the Roman numerals C for 100 (from centum) and M for 1000 (from mille).
Additional relevant information is provided by Otto Neugebauer: Contrary to Jan Z's statement (which I have also seen in other sources) that DE changed the Alexandrian epact of 30 to "nulla", we have tables totally independent of DE which show that the first Alexandrian epact was 0. They are in the Ethiopic computus (Otto Neugebauer, Ethiopic astronomy and computus) where Neugebauer always uses "0" whenever the Ge'ez/Amharic script used the abbreviation 'do for bado (zero). One of the tables is an Ethiopic copy (now in Jerusalem) of an Alexandrian Easter table for the Diocletian years 27-85 (AD 311-369) which agree with the data from Athanasius' letters for the years where they overlap, except for D 49, which clearly has a scribal error. Furthermore, Neugebauer notes in A History of Ancient Mathematical Astronomy that these same epacts appear in the Small commentary of Theon of Alexandria on the Handy Tables of Ptolemy (c.375). Although I have not consulted the latter, Neugebauer states in The Exact Sciences in Antiquity that both Ptolemy and Theon used a symbol for zero similar to o macron (ō) (see History of Mathematical Symbols: Zero). This implies that DE used the closest Latin designation that he had available to him, nulla, when he was translating the Greek Alexandrian zero into Latin.
Jan Z: Regarding the Egyptian nfr: It does not matter what you or I think the Egyptians meant. It is sufficient that the cited author calls it a symbol for zero for it to be called zero in Wikipedia. This is by far the earliest known use of zero. — Joe Kress 08:21, 17 January 2007 (UTC)
Joe's sources are quite helpful. It would be good if some references were added to the article. Of course, some of his claims could be looked at in a slightly different way, but I am now finding it hard to take Jan Z seriously. As well as repeatedly referring us to "objective" original research which simply imposes his own views, he claims that there is a "generally accepted rule that in early medieval Europe nobody used the number zero". I have only ever seen this claim concerning the zero digit, not the number, and one of Jan's sources (Struik's "A Concise History of Mathematics") that supposedly "implicitly" confirms that they did not use zero barely touches on the number zero, discussing "zero" only in the context of positional numeral systems. I don't think mention of the Hindu śūnya (and comparison with Aristotle's kenos) can be considered an implicit claim that DE and BV did not use a zero. JPD (talk) 13:16, 17 January 2007 (UTC)

I’m sorry, I didn’t realize that one would use the term ‘zero’ not only for the digit or the number itself but also for e.g. something like a placeholder for zero. This implies that ‘zero’ as such is a far from clear concept. In this way there are many ‘zeros’, in contrast to the (unique) number zero itself. Where e.g. B.L. van der Waerden speaks about Ptolemy’s omicron o, he does not say it is the number zero, but clearly he considers it to be (something like) a placeholder for zero, for he says carefully “for zero he has the symbol o”. But where he, or D.E. Smith, or Otto Neugebauer, or Dirk Struik, or Faith Wallis uses “zero” or even “0”, more often than not they are speaking loosely and mean only (something like) a placeholder for zero; at any rate, being conscious of the relativity of the meaning of the term ‘zero’, they never claimed (to my knowledge) anything like an ancient or early medieval number zero. To keep our discussion meaningful we have to speak strictly, and we are compelled to distinguish sharply any ‘nothing’ (either as such or as something like a placeholder for zero) from the (abstract) mathematical concept of the number zero itself, and to abandon the use of the vague bare term ‘zero’. It is the status of DE’s “nulla” and that of BV’s “nullae” we are talking about, and I realize that we have to tone down not so much the (vague) qualification “zero” as the qualifications “a true zero” and “the number zero” for them.

JPD, look in the Wiki item [Dionysius Exiguus]: there we meet the (JK’s?) phrase “rendering false the common assumption that the number zero was unknown in Western Europe until its symbol (0) was obtained”, which implies that it is a common assumption that in early medieval Western Europe the number zero was unknown. And if you want to get an impression of Dirk Struik’s opinion about something like an early medieval zero in Europe, I would recommend that you read the first four sections of chapter 5 (“The beginning in Western Europe”), not only chapter 4 section 3. The content of that chapter 5 makes clear why the number zero was unknown to early medieval scholars (Boethius was the only exception to the rule that in early medieval Western Europe knowledge of mathematics was no more than elementary calculating with positive integers, but the number zero was unknown to him too). DE’s “nulla” and BV’s “nullae” were ‘nothing’, either as such or as something like a placeholder for zero, but certainly not the (abstract) number zero itself, which (including its purely mathematical context) came to Europe around the year 1200.

Only in special cases can ‘nothing’ be considered as the (abstract) number zero; in these cases there must be a purely mathematical context which boils down to calculations with abstract numbers, including a symbol or word which plays the part of ‘zero’. There is nothing from which we can deduce that DE’s “nulla” would be the number zero, for DE did not explicitly say somewhere something like 4 + nulla = 4 (which would give evidence of the insight that adding nothing is a special case of adding something). Saying something like 4 epacts + no epacts = 4 epacts, or like 4 epacts – 4 epacts = no epacts = ‘nothing’, is not enough; even saying 4 – 4 = nulla is not enough (to be familiar with the number zero, calculating with abstract positive numbers and a symbol or word which plays the part of the number zero is required).

Joe Kress: to defend your claim that DE and BV were acquainted with the number zero (a claim which indeed needs a “cited author”), you refer to FW (she is, as we are, a child of our time after all). But it is not the number zero itself of which she speaks, but only a ‘zero’ of some kind. Moreover, where FW indicates (loosely speaking) that DE and BV used ‘zero’, this interpretation is based only on her own, in fact too free, translation. BV’s “nullae sunt epactae” (literally “there are no epacts”) and “nulla epacta” (literally “no epact”) need a more objective translation than hers. And so does BV’s “de octaua decima in nullam facere saltum”. For our modern (updated) brain it is difficult to read here anything other than “to jump from 18 to 0”. But even modern people use phrases such as “jump into nothingness”. The interpretation of “no epacts” as ‘zero’ is FW’s, not Bede’s. And the interpretation of Bede’s “nullae” as the number zero is yours, not hers. Since she gives no further argument (being conscious of the relativity of the meaning of the term ‘zero’), FW cannot make a decisive contribution to our discussion. Nor is the isolated abbreviation N for BV’s “Nullae” decisive (by the way: in the same table of epacts we find epact 30 instead of epact “nullae” again). In short, on analysis (carried out above), DE’s “nulla” and BV’s “nullae” will always prove to be nothing other than ‘nothing’ (either as such or as something like a placeholder for zero). At most they could perhaps be considered precursors of the number zero.

Although DE’s “nulla” and BV’s ‘nullae’ are very zero-like things indeed, neither DE nor BV was familiar with the number zero, for 1 in early medieval Western Europe the number zero was unknown (the common assumption), 2 DE’s “nulla” and BV’s ‘nullae’ mean only ‘no epacts’ (see my contribution above), and 3 DE and BV did not use “nulla” or “nullae” in their abstract calculations (see my contribution above). Moreover, 4 even if JK’s interesting suggestion is correct that the ps.-Cyrillan Easter table (which DE used to construct his own) contained epact omicron o instead of epact 30, DE’s “nulla” would still be at most ‘nothing’, either as such or as something like a placeholder for zero. This holds whether we follow JK’s interpretation that it is a “translated” omicron o (itself, after all, something like a placeholder for zero), or my interpretation that DE not only did not “translate” this symbol but simply did not accept it, because he had not the slightest need for any symbol and contented himself with his familiar “nulla” instead. Anyhow, we cannot escape the conclusion that BV’s “nullae”, as much as DE’s “nulla”, was only ‘nothing’ (either as such or as something like a placeholder for zero), and that both were far from the number zero itself. So it is quite premature to “render false” the current opinion that in medieval Western Europe one had to wait until the second millennium before the number zero became available. In early medieval Europe the times were not yet ripe for the coming into being of the number zero. The birth of the number zero was a maturing process, and its dissemination across Asia (beginning around the year 600), like its dissemination across Europe (beginning only in the twelfth century), took centuries.

Maybe DE’s “nulla” and BV’s “nullae” are precursors of the number zero, but there is no “cited author” claiming they are the number zero itself. FW, being conscious of the relativity of the meaning of the term ‘zero’ into which she translated BV’s “nullae”, never seriously claimed anything like an early medieval number zero (to my knowledge). So first of all we have to tone down the term “number zero” in the paragraph in question in the Wiki item [Dionysius Exiguus]. To begin with, trying to leave intact as much as possible in that paragraph, I would propose 1 to replace in that paragraph “Latin medieval writer to use the number zero” with “medieval Latin writer to use a precursor of the number zero”, 2 to delete the superfluous third sentence of that paragraph, and 3 to replace the last sentence of that paragraph with “Both precursors of zero continued to be used by (among others) Beda Venerabilis, by whose extension of Dionysius’ Easter table to a great Easter cycle all future Julian calendar dates of Easter Sunday were after all fixed unambiguously. However, in medieval Europe one had to wait until the second millennium before the number zero itself, which had come into being around the year 600 in India, became available.”.

As far as the Wiki item [Number] is concerned, many vague statements concerning “zero” disfigure this item. There seem to be many different sorts of zero. What is a ‘true zero’ understood to mean? What is the difference between a true zero and a placeholder for zero? What is the difference between a true zero and the (abstract) mathematical concept of the (real) number zero? And why would Ptolemy’s omicron o be more than something like a placeholder for zero? Why would DE’s “nulla” or BV’s “nullae” be a true zero? We have to distinguish between the number zero itself and all its precursors, and we have to make clear that the number zero came into being around the year 600 in India and did not reach Europe until the second millennium. Jan Z 06:40, 7 February 2007 (UTC)

Jan, using Wikipedia articles as a source is a particularly silly thing to do. At any rate, the article Dionysius Exiguus speaks of a "common assumption", not a view commonly held by historians of mathematics who have actually bothered to think about the difference between the number and the digit. I did read chapter 5 of Struik before making my comments above - perhaps you should read them again. Struik does not mention zero in any form anywhere in the sections you mention, and the closest he gets to the number zero (as opposed to the digit) in the whole book is the reference to śūnya that I mentioned earlier. To take the things he does say about mathematics in Western Europe at the time to imply the nonexistence of zero is a massive jump that needs to be supported by some other citation.
Having said that, I do agree with you that translating something as zero is not really saying that they used "a zero". When it comes down to it, a lot of the problem is that you insist that quite a high level of abstraction is necessary to call something a number zero, and Joe for one doesn't agree. Whether your opinion is valid or not (let alone whether the opposite opinion is invalid), you haven't provided any other sources to back up this opinion. As for the text in this article, it quite clearly talks of placeholders as things equivalent to or precursors of our digit zero in a positional numeral system. The Greek omicron was clearly something different to this, whether you think it is fair to call it the number zero or not. JPD (talk) 13:21, 7 February 2007 (UTC)

Additional information concerning JK’s remark on the use of the abbreviation bo for the Ge’ez or Amharic ‘albo: it is Neugebauer himself who put his speaking of “zero” in perspective, for on p. 232 of his “Ethiopic Astronomy and Computus” he notes that “Whenever the number (sic) zero occurs in tables it is written as a word “nothing” (‘albo). No special symbol (as in Greek and Arabic) seems to be known.”; moreover, this quotation implies that there is no trace of omicron o. Jan Z 10:18, 10 February 2007 (UTC)

Additional information concerning Struik’s “zero”: In his “A Concise History of Mathematics” Struik translates the Indian digit (namely a digit in a positional system of numerals) ‘sunya’ into ‘zero’, relating it to the Greek ‘kenos’ (which means ‘the empty’); clearly this ‘zero’ must be interpreted as no more than a placeholder for zero. Moreover, in his (Dutch-language) booklet “Tellen zonder en met cijfers” (which means “Counting without and with digits”) Struik translates ‘sunya’ directly into ‘het lege’ (which means ‘the empty’). In the same booklet he translates the Latin ‘nulla figura’ into ‘geen getalteken’ (which means ‘no number symbol’) or ‘stoplap’ (which means something like ‘stopper’ or ‘plug’). Jan Z 10:05, 19 February 2007 (UTC)

Yes, as I said, Struik's Concise History only ever discusses zero as a digit in a positional numeral system and does not address the question of the number zero at all. The only time there is any doubt about this is this example, which you have clearly shown is purely about digits, backing up my point. To claim that he makes it clear the number was unknown is just plain sloppy. JPD (talk) 12:09, 19 February 2007 (UTC)

Yes, but in Struik’s view ‘sunya’ is not only a digit but also no more than a ‘stoplap’: only a ‘stoplap’, and no number in its mathematical meaning. It is his choice of the somewhat disparaging word ‘stoplap’ that expresses that in his view it is no number. Jan Z 19:04, 19 February 2007 (UTC)

I would invite Joe Kress (after his last contribution of 17-1-2007) to take cognizance of the new contributions by JPD and by me to the discussion concerning the Wiki item [Number] and to the discussion concerning the Wiki item [Dionysius Exiguus], and to react to them. But I would ask him, JPD and others to react to the discussion concerning the Wiki item [Dionysius Exiguus], especially to my proposals to improve that item, at Talk:Dionysius_Exiguus (after all, the only right place for this discussion), where I would continue this discussion. Jan Z 19:04, 19 February 2007 (UTC)

Additional information: concerning DE's epacts, Georges Declercq (who is familiar with FW's translation of “De Temporum Ratione”) says (literally) in his "Anno Domini" (on p. 103) that they are "a number from 'nil' (the number 0, of course, being then still unknown) to 28". Jan Z 10:51, 25 February 2007 (UTC)

So Struik doesn't mention the number zero in medieval Europe at all, whatever he says about India. Declercq says that 'nil' was a number but 0 was unknown. Are you trying to convince us that zero was unknown or that Declercq was writing nonsense? JPD (talk) 11:33, 26 February 2007 (UTC)

I tried to convince you that 1 the use of the term ‘number zero’ in the context of early medieval Europe is quite premature, and 2 the number zero was unknown in early medieval Europe. As for the first point: I succeeded (see my contribution of 7-2-2007). As for the second point: I can only repeat my argument that nobody has demonstrated that the number zero was known in early medieval Europe. My contributions after 7-2-2007 to Talk:Number (only additional information) were intended only to help create the clarity needed to improve the Wiki item [Number]. Yes, Struik doesn't mention the number zero in early medieval Europe at all, precisely because of the common assumption that the number zero was unknown in early medieval Europe. In his "a number from 'nil' (the number 0, of course, being then still unknown) to 28" Declercq (not writing nonsense) is speaking loosely. He must have meant "a “number” from 'nil' (the number 0, of course, being then still unknown) to 28" (mind the quotation marks). Or do you think that we must distinguish between ‘number zero’ and ‘number 0’ (if so, tell me what the difference is)? Within the scope of improving the Wiki item [Number] I would repeat my question in what respect Ptolemy’s omicron o differs from any other placeholder for zero. Jan Z 11:00, 27 February 2007 (UTC)

I am not sure why you think you are the judge of when you have succeeded in convincing us. Of course I am not distinguishing the number zero and the number 0. Declercq does, loosely speaking or otherwise, and so has written nonsense. It is true there is no particular need for him to write carefully about it in that context, but the fact that it is so garbled is a sign that he is not seriously addressing the question of the number zero. Struik, also, does not touch on the issue at all - not because it's a common assumption, but because he doesn't go anywhere near the topic. There hasn't been any clear source for the assertion that the number zero was known in early medieval Europe, but you have not given a serious source for the notion that it is premature, either! (We are not interested in "common assumptions", which are often wrong, only in reasoned opinions of people who have looked into the history of mathematics.) The article is quite clear on what the difference between Ptolemy's omicron and earlier placeholders is. Other than issues of wording (true zero, etc.), do you think the article is inaccurate in saying that Ptolemy used a numeral to represent zero or nothing, not just as a digit? —The preceding unsigned comment was added by JPD (talk • contribs) 12:08, 27 February 2007 (UTC).

We are only interested in reasoned opinions of people who have looked into the history of mathematics? Even when these opinions are badly reasoned? Let us try to get clarity. We have to distinguish between the digit zero, the number zero, and a placeholder for the number zero. Loosely speaking they are all zeroes, I agree. But the digit zero and a placeholder for the number zero are not yet the number zero. Concerning Ptolemy's omicron the article says “Because it was used alone, not as just a placeholder, this Hellenistic zero was the first documented use of a true zero in the Old World.”. What is meant by “it was used alone”? And what is meant by “a true zero”? Analyzing Ptolemy’s table in Neugebauer’s “The Exact Sciences in Antiquity” (p. 10), we realize that Ptolemy’s omicron is indeed quite a placeholder for the number zero, but it is neither a digit zero nor the number zero. It is a ‘zero’, okay. But a ‘true zero’? The undefined term ‘a true zero’ unintentionally suggests a digit zero or the number zero, which we have to prevent. Jan Z 19:13, 28 February 2007 (UTC)

Obviously I meant well-reasoned opinions, but even badly reasoned published opinions are more suitable for Wikipedia than "common assumptions". As I have said before, when anyone other than you says "placeholder", they mean the digit zero. While the article may use sloppy language, it is clear to anyone reading it sensibly rather than simply trying to argue with it that the intended meaning is that his symbols were used where we would now think of the number zero, not simply as a digit in a non-zero numeral. I have no idea whether this is true or not, and your response doesn't shed any light on the matter. Could you explain how this is or isn't true? Then we can worry about the wording. "True zero" as implicitly defined in the article definitely does not imply a digit, and whether we have to prevent the suggestion that it is a number is exactly what is disputed; but either way, we should make it clear what Ptolemy did do, not simply rely on words such as "true zero" or "placeholder". JPD (talk) 10:07, 1 March 2007 (UTC)

I’m sorry, I made a mistake, for of course Ptolemy’s omicron o is a digit zero. What the article needs is an example by which it is made clear that what is called (loosely speaking) a “zero” can be a digit zero (such as ‘sunya’, namely in the Indian decimal positional system) as well as a placeholder for (the number) zero (such as DE’s “nulla”, namely related to Roman numerals such as xxxi = 31), the digit zero and the placeholder for (the number) zero being different things. Such an example we can find in Ptolemy’s own “zero” and in Ptolemy’s omicron o (which symbol was different from Ptolemy’s own “zero”), for both of them were a digit zero, namely in the Babylonian sexagesimal positional system, as well as a placeholder for (the number) zero, namely related to Greek numerals such as λα = 31. DE’s “nulla” is an example of a placeholder for (the number) zero which is no digit, and ‘sunya’ is an example of a digit zero which is no placeholder for (the number) zero. Jan Z 11:14, 4 March 2007 (UTC)

For the sake of clarity I would prefer to replace the confusing term ‘placeholder for zero’ with the term ‘precursor of the number zero’. Jan Z 18:02, 31 March 2007 (UTC)

On second thoughts in my contribution of 4-3-2007 the word ‘digit’ must be replaced with the word ‘numeral’. Jan Z 14:57, 19 April 2007 (UTC)

Before History of Zero

I remember reading a long time ago that primitive cultures didn't have a name for 3 or more, equating anything above 2 with infinity. Also, IIRC, in Greece the first number was 2, and 1 was not considered a number (it's not necessary in language). Does anyone have a source on this? This should be History of Three and History of One :-) Albmont 16:44, 4 January 2007 (UTC)

It is true that some languages did/do run out of names for numbers quite early. I'm not exactly sure how this relates to the article: once the concept of number exists, the extension of the concept to larger and larger values is not more significant at one stage than at another. The article already refers to the Greeks' questioning of 1, but doesn't that have more to do with philosophy than language? JPD (talk) 17:34, 4 January 2007 (UTC)

Inconsistent Information about 'Hippasus of Metapontum'

In the article section about the history of irrational numbers, the article mentions that Hippasus was killed by Pythagoras himself ("...However Pythagoras believed in the absoluteness of numbers, and could not accept the existence of irrational numbers. He could not disprove their existence through logic, but his beliefs would not accept the existence of irrational numbers and so he sentenced Hippasus to death by drowning.").

However, in the wiki article about Hippasus of Metapontum it is mentioned otherwise: he is supposed to have been killed by fellow Pythagoreans.

Please verify and correct. —The preceding unsigned comment was added by 59.94.112.63 (talk) 19:20, 26 March 2007 (UTC).

First, this is not inconsistent, since the person who passes sentence is not the person who carries that sentence out. Second, it is a story, and there is no particular reason to think that it is a true story. I'll do some research and make the facts clear in both places. Rick Norwood 19:44, 26 March 2007 (UTC)

Kettenbruchdeterminanten

What is this? It looks like a bad German translation. Literally it means "determinant of continued fraction", but I have never heard of such a thing. pl:Wikipedysta:Olaf —The preceding unsigned comment was added by 83.5.253.53 (talk) 07:15, 9 April 2007 (UTC).

Complex numbers are not an arbitrary extension of the reals

I remember reading years ago, in a maths textbook, about the general formulae that solve (I think) quartic polynomials in the real number domain. Unfortunately I cannot find the book. The point was that these formulae contain square roots, and for certain types of polynomials, as you perform the calculations, a subtraction occurs between the square root of a negative number and the square root of the same negative number. The results are real numbers, but two complex numbers briefly appear while applying the formula. I know this sounds very vague, but maybe it could be pertinent. Anybody who knows better, please consider adding it to the article. 151.20.157.195 11:09, 12 April 2007 (UTC)
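
(The classic instance of this phenomenon is the casus irreducibilis of the cubic; whether it is what the poster's book described is an assumption. A minimal Python sketch using Bombelli's example x^3 = 15x + 4:)

    # Bombelli's cubic x^3 - 15x - 4 = 0 has the real root 4, yet Cardano's
    # formula routes the computation through square roots of a negative number.
    import cmath

    p, q = -15.0, -4.0                  # x^3 + p*x + q = 0
    disc = (q / 2) ** 2 + (p / 3) ** 3  # 4 - 125 = -121: negative
    root = cmath.sqrt(disc)             # 11i, a purely imaginary intermediate
    u = (-q / 2 + root) ** (1 / 3)      # principal cube root of 2 + 11i = 2 + i
    v = (-q / 2 - root) ** (1 / 3)      # principal cube root of 2 - 11i = 2 - i
    print(u + v)                        # ~ (4+0j): the imaginary parts cancel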

There are a number of examples where computations involving complex numbers lead to real answers, but more important, the complex numbers are a splitting field for every polynomial with real (or complex) coefficients. That is, every polynomial factors into linear factors over the complex numbers. Rick Norwood 12:45, 12 April 2007 (UTC)

neutrality flag

If there is a serious point behind this flag, please explain it. Rick Norwood 13:01, 16 April 2007 (UTC)

I got plenty of nothing.

Jan Z seems to have a substantial knowledge of the subject of zero, but I have trouble understanding what he has written. In case anybody else is having similar problems:

The ancients loved to discuss what is or isn't a number. For some, one wasn't a number -- the numbers began with two. For others, the square root of two wasn't a number, because it was geometric rather than algebraic.

We now understand that definitions are arbitrary, to some extent, and a number is anything we say is a number. Today, zero is a number. It stands for the quantity "none at all". (Resist temptation to quote Hitchhiker's Guide to the Galaxy.)

Zero is also a digit and is used, in decimal notation, as a placeholder, as in 208, which is read "two hundred eight".

So, there are really three things under consideration here. Of course, every people has a way of saying "none at all", so that is moot. The three questions are: Did ancient people have a symbol (as opposed to a word) for "none at all"? Did ancient people consider this symbol a "number", like 2, or something else? Did ancient people ever use a symbol, either the same symbol as their 0 or a different symbol, as a placeholder?

Jan Z seems to be saying that they did, but his examples don't have placeholders in them, so it is not clear to me what he is saying.

My understanding is that the Babylonians, at least, wrote (using I for an upright wedge and > for a sideways wedge) II>>>I for two sixties, three tens, and one = 151, but that they wrote II I for two sixties and one = 121, with a space rather than a placeholder, and also that II could be 2 or 120 depending on the context. Of course, they may have done one thing at one time and another thing at another time, they had a long history, and most of their tablets have never been transcribed, much less translated.
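
(A minimal Python sketch of why the placeholder matters, under the reading of Babylonian numerals just described; the digit lists, and the modelling of a blank space as the digit 0, are illustrative assumptions.)

    # A Babylonian-style numeral as a list of base-60 digits; a gap with no
    # placeholder is modelled here as the digit 0.
    def sexagesimal(digits):
        value = 0
        for d in digits:
            value = value * 60 + d
        return value

    print(sexagesimal([2, 31]))  # II>>>I: two sixties, three tens and one = 151
    print(sexagesimal([2, 1]))   # II I: two sixties and one = 121
    print(sexagesimal([2]), sexagesimal([2, 0]))  # II: 2 or 120, depending on context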

Does anyone have more information about this?

Rick Norwood 20:02, 19 April 2007 (UTC)

Types of numbers

Typographically, this section is a little bit of a mess. Is there value in unifying the presentation of the numbers and sets of numbers to all be in the usual typeface (for example, −1234.56 instead of the PNG-rendered version, and N instead of the blackboard-bold PNG)? Xantharius 18:18, 4 June 2007 (UTC)

Unless there are objections I shall proceed with the above plan. Xantharius 17:55, 5 June 2007 (UTC)
After a link to Blackboard bold appeared on the page, I discovered that it was possible to have the Unicode symbols for the most popular number systems instead of the PNG-style characters: thus ℚ instead of the PNG image. I think the Unicode ones are far nicer. Thoughts? Xantharius 16:59, 11 August 2007 (UTC)

Looks good to me in the article. The blackboard bold Unicode Q doesn't show up very well in the paragraph above. Rick Norwood 18:02, 11 August 2007 (UTC)

It was just that the inline use of the PNG characters disrupts the line spacing: the Unicode ones don't. They also don't stand out as much (actually preferable, in my opinion!). Xantharius 20:08, 11 August 2007 (UTC)

But what _is_ a number?

"... an abstract idea used in counting and measuring" has no value as a definition, especially as the concept is not further visited. There are plenty of abstract ideas used in counting and measuring -- operators, equality, order -- the "definition" seems to hope that the reader already knows a number when she sees it.


I would propose the following definition: a numeric system is a monoid whose operator preserves a prescribed total order, and a number is an element of a numeric system, or more specifically of one of the canonical numeric systems elevated by mathematicians over the centuries. This highlights the essential interface that we have had with numbers from pre-history: we can add them to each other and we can compare them against one another. Of course, in many numeric systems we can do more, but this seems to be the core that places numbers so fundamentally at the heart of our understanding of the world.
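
(A toy Python check of this proposal on a finite sample of elements; the function name and the brute-force tests are illustrative assumptions, not a standard construction.)

    # Check, on a finite sample, that (elements, op, identity) looks like a
    # monoid whose operation preserves the order: a <= b implies a+c <= b+c.
    def is_ordered_monoid(elements, op, identity):
        xs = list(elements)
        has_identity = all(op(x, identity) == x and op(identity, x) == x for x in xs)
        associative = all(op(op(x, y), z) == op(x, op(y, z))
                          for x in xs for y in xs for z in xs)
        monotone = all(op(x, z) <= op(y, z) and op(z, x) <= op(z, y)
                       for x in xs for y in xs for z in xs if x <= y)
        return has_identity and associative and monotone

    # The naturals 0..9 under addition pass on this sample:
    print(is_ordered_monoid(range(10), lambda a, b: a + b, 0))  # True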

The "flaw" with this definition, of course, is that it excludes the complex field which has no inherent total order. I'm not willing to undertake WP:BOLD without feedback because it may be a heretical notion, but I don't believe this to actually be a flaw: I don't see that the elements of a complex field are much different than a matrix of numbers or a polynomial: an extension of a numeric system that uses the underlying numbers to form rich algebraic structures.

So shall I take a stab at editing the page, or is this a non-starter and people actually like numbers being an abstract idea used in counting and measuring? MatthewDaly 02:44, 6 November 2007 (UTC)

I'm afraid we're not permitted to make up our own definitions in WP articles. Doesn't matter whether they're good or not, so I won't address your proposal on the merits. Please review WP:NOR. --Trovatore 03:04, 6 November 2007 (UTC)
I am unclear on the scope of your rejection. The introduction is completely unsourced, so someone seems to have made up the "abstract idea" definition. It is hardly original research to observe that the concept of number has historically been about computability and order, as these are the whole of the core of the numerical structures of Peano, Dedekind, and Conway, whose work I would intend to both leverage and reference were I to help on this page. I can appreciate that there is a lack of unanimity among mathematicians when it comes to the "numberness" of mathematical structures that are generated from the real numbers but have increasingly pathological behavior (complex numbers and polynomials are not ordered, matrices and quaternions don't have commutative multiplication, etc.), but the current alternative of devising a definition that is so vague as to not clearly exclude things that are universally regarded as not numbers strikes me as a poor one. MatthewDaly 05:14, 6 November 2007 (UTC)
Well, I was indeed rejecting your proposal (as it relates to the article), but I was not specifically defending the current text (which actually I hadn't read recently). There is no accepted general definition of "number", and the article certainly should not give the impression that there is.
However, now that I take a brief glance at the existing text, I don't see anything terribly wrong with it. I see the assertion "[a] number is an abstract idea used in counting and measuring" as being a descriptive assertion rather than a definition, and one that should be pretty uncontroversial -- that is, we all agree that the abstract idea of number is used in counting and measuring, whether or not we think that this usage precisely isolates what it means to be a number. Maybe you'd like to propose some text that makes more explicit that the sentence is not a definition?
Now that leaves open the question of whether the article should discuss definitions (necessarily plural, I think) that have been proposed for the notion of "number" in general. I don't think it's terribly necessary -- and I certainly wouldn't put it in the lead section -- but it might make an interesting sidelight somewhere in the body of the article. But any such definitions need to be sourced, and it should not be implied that any of them has general acceptance, because I think that none of them does. --Trovatore 06:35, 6 November 2007 (UTC)
Has MatthewDaly been reading Mathematics Made Difficult (ISBN 0-7234-0415-1)? That definition looks as if it was from there. — Arthur Rubin | (talk) 14:24, 6 November 2007 (UTC)
I have not, I'll have to look it up. It seems to me exactly the definition that any formalist would devise; I wonder why they do not seem to have made a point of doing so. Perhaps it doesn't have the same utility as we got from axiomatizing the previously abstract notions of "set" and "proof", but at least it would allow students to understand why some mathematical objects are universally understood to be numbers while others are universally excluded. Ah well, I am disappointed but sanguine.MatthewDaly 17:41, 7 November 2007 (UTC)

A number system doesn't have to be a monoid. If you define a number system to be a type of monoid, then the Octonions are not a number system, as they are nonassociative. Willow1729 (talk) 23:31, 21 September 2008 (UTC)

In Euclidean (plane) geometry a straight line always has a large number of point locations. Then, if you buy a location point for the number zero and the number 1, you have a method of creating an ordered set of integer-numbered distances on the line, which can increase up to but not include a value for infinity. Then there are other categories of distances on the number line, like the rational number (m/n) values, and the square-root-of-diagonal values as developed by the Pythagorean theorem. And since these square root values have real distance locations on the number line, they are still considered to be real numbers, but with an irrational name. And finally there are number names like Pi that don't have a determinable point location on the straight number line, and also point locations with neither a name nor a determinable location. This is all part of the rudiments of mathematical development which most people have forgotten or never been exposed to. WFPM (talk) 20:48, 9 November 2008 (UTC)
I don't think that the number Pi can be considered to be a real number, since its exact value cannot be located on the number line. WFPM (talk) 21:59, 9 November 2008 (UTC)
Whether this is a suggestion as to improvement of the article rather than commentary about the subject (which should be summarily deleted), it's nonsense. Not even intuitionists and constructivists believe that π is not a real number. And finitists don't believe there is such a thing as a real number.... — Arthur Rubin (talk) 22:15, 9 November 2008 (UTC)
So now, in addition to all the distances on the number line, you want to create a category of real numbers whose exact location on the number line cannot be found. However, I've got to admit that you could come pretty close by rolling a circle as per the article. But that's not plane geometry as defined by Euclid. But a point on the number line isn't defined by an approximate location, and I thought that real numbers were. But I'm not a nit picker, just a definition picker. WFPM (talk) 00:42, 10 November 2008 (UTC)
And I'm not an intuitionist or a constructivist or a finitist. I'm just an engineer interested in science. WFPM (talk) 00:47, 10 November 2008 (UTC)
PS Following mathematical logic, we can agree that if we knew the real value of Pi it would be a real number. But since we only have an approximate, albeit pretty accurate, estimate, we can't say that our estimate represents the real value of Pi. WFPM (talk) 00:53, 10 November 2008 (UTC)


I would say that any attempt to define a number would be OR. Numbers are those things we have decided to call numbers. We have good definitions of particular types of numbers: natural numbers, real numbers etc., but not of the word number, which is more a linguistic device than a mathematical definition. --Salix (talk): 01:50, 10 November 2008 (UTC)

Yes, but besides the location of a point or the magnitude of a number, we have the definition of things. And a point doesn't have a dimension, just a unique location. And a number doesn't have any other significance than being the distance involved with a point on the ordered number line. And if you can't find the point, the number becomes indeterminate. WFPM (talk) 02:28, 10 November 2008 (UTC)
If it's just your opinion, even if justified, it has no place in a Wikipedia article, unless you can find some reliable source which reports that π is not a real number. Please see original research. I've never seen anything like that in print or on the web. — Arthur Rubin (talk) 07:49, 10 November 2008 (UTC)
I guess you're right. The real number Pi would be a real number. But what do you do if you can't find its location on the number line? And where else are you going to find out its true value? And I use the Pi and epsilon values often because they're in my Casio, and I need them as mathematical tools for approximate calculations. And I even call up a random number occasionally, even though I know that it is not really a random number. But I would think that if you are going to have an article about numbers you would start out by defining what a number really is. And the only thing that I can think of that it really is, is a distance from zero to a point on the number line. WFPM (talk) 15:46, 10 November 2008 (UTC)
I notice that Wikipedia also has an article on Ratio, which also involves mathematical quantities. And I'm trying to think of a ratio that has a quantity that couldn't be determined as being a real number and thus a point on the number line. But I can't think of any. WFPM (talk) 16:31, 10 November 2008 (UTC)

To find pi on the number line, make a circular disk whose diameter is equal to the distance from zero to one on the number line. Make a mark on the edge of the disk and put that point on the disk on zero. Roll the disk to the right, without slipping, and when the mark comes down and touches the number line, that point is pi.

A number can have a measurement attached: 12 eggs or 4 meters. The measurement is called a "dimension". A ratio is a number without a measurement. Pi is a ratio -- whether we measure a circle using inches or meters, it doesn't matter, pi always comes out the same. Ratios are calculated using division. Pi, for example, is circumference divided by diameter. Sometimes ratios are written as fractions, 5/4, and sometimes with a colon, 5:4 (read "the ratio of five to four"). The rule for equal ratios is: two ratios are equal if the product of their means equals the product of their extremes. Thus a:b = c:d if and only if bc = ad. Rick Norwood (talk) 18:33, 10 November 2008 (UTC)
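
(A minimal Python sketch of the means-and-extremes rule; integer arithmetic keeps the test exact, and the names are illustrative.)

    # a:b = c:d exactly when the product of the means (b*c) equals the
    # product of the extremes (a*d).
    def ratios_equal(a, b, c, d):
        return b * c == a * d

    print(ratios_equal(5, 4, 10, 8))  # True:  5:4 = 10:8
    print(ratios_equal(5, 4, 4, 5))   # False: 5:4 is not 4:5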

All numbers are numbers without measurement. Measurements are just mathematical applications of the set of numbers. Except for a conceived ordered number line, which is supposed to include all numbers together with their appropriate distance locations from zero on the number line. And I'll agree that Pi is a ratio quantity, since you can't determine it mathematically. But the number line should be considered as being made up of an infinity of points, and therefore infinitely divisible. But not all of the points are quantitatively determinable. In fact it is said that there are an infinity of numbers on the number line within the range of zero to 1. WFPM (talk) 19:02, 10 November 2008 (UTC)
And I agree with you that Euclid should have been smart enough to have used a circular piece of paper to measure out the Pi distance on the number line. WFPM (talk) 03:29, 19 November 2008 (UTC)
Ok, I'll stick my neck out and claim that π can be determined mathematically. By the Leibniz formula for pi it is four times 1/1 - 1/3 + 1/5 - 1/7 + ... Given any rational number, finitely many terms of this series suffice to decide whether that number is smaller or larger than π (it can never be equal to π, because π is irrational). Therefore π cuts the rational line at a well-defined point, and this, according to Dedekind, is what it means to be a well-defined real number. If you don't agree, your quarrel is with Dedekind, not me. --Vaughan Pratt (talk) 19:48, 3 January 2009 (UTC)
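
(A Python sketch of this decision procedure, assuming only facts stated above: the Leibniz series is alternating with decreasing terms, so consecutive partial sums bracket π and the loop always terminates for rational input. The function name and structure are illustrative.)

    # Decide whether a rational q lies below or above pi, using the Leibniz
    # series 4/1 - 4/3 + 4/5 - ...; exact rational arithmetic via Fraction.
    from fractions import Fraction

    def compare_to_pi(q):
        partial, k = Fraction(0), 0
        while True:
            term = Fraction(4, 2 * k + 1) * (-1) ** k
            lo, hi = sorted((partial, partial + term))  # bracket around pi
            if q < lo:
                return '<'  # q < pi
            if q > hi:
                return '>'  # q > pi; equality never occurs, pi is irrational
            partial, k = partial + term, k + 1

    print(compare_to_pi(Fraction(3)))      # '<'
    print(compare_to_pi(Fraction(22, 7)))  # '>'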

A number is

A number is a word (concept) that represents and contains a sequence of patterns or data. One word equals one object.

For instance, the numeral one represents the mental object of one, as well as the geometric symbol of one.

The numeral one could be considered the first letter of the mathematical alphabet. I think it's best to think of mathematics as a language that describes shapes and patterns. A number would be considered both data and a shape at the same time, a data-shape. In fact all numerals are geometric shapes. -- Yours truly BeExcellent2every1 (talk) 12:24, 21 November 2007 (UTC)

This is original research and not suitable for Wikipedia. Rick Norwood (talk) 15:30, 19 November 2007 (UTC)

Catalan definition of number

I suggest the definition given by Pompeu Fabra in his Dictionary of Catalan Language.

It can be translated into English like:

A number is the concept that arises from counting things which form a collection, or a generalization of this concept.

This definition has several advantages:

  • It is clear to everybody
  • Directly relates numbers to the intuitive groundings of set theory (collection).
  • Includes all kinds of numbers, because all of them can be considered generalizations of natural numbers (those which arise from counting).
  • Excludes all things that are not considered numbers.
  • It is not original research. It is the definition given by an expert both on mathematics and on language. There is a clear bibliographic reference.

I don’t know if it is the best solution for the English Wikipedia. In Catalan there are three completely different words to express: a) "nombre", the abstract concept of number (the definition I am suggesting); b) "número", the representation of the number in a numeration system; and c) "xifra", the symbols used to represent the numbers. I fear this is not the same in English, but from a mathematical point of view, when we talk about numbers we think of the abstract concept. --62.57.139.143 (talk) 12:27, 1 January 2008 (UTC)

It seems very similar to the current opening sentence, except that we refer to measurement as well as counting. I think this is a good thing, as measurement is significant in its own right, not just a generalisation of counting. JPD (talk) 11:25, 2 January 2008 (UTC)
As has been said before: “There are plenty of abstract ideas used in counting and measuring”.
A lot of confusion comes from involving the measuring process in defining number.
Measuring can be reduced to counting how many times the unit of measure is contained in the magnitude to be measured.
But in the process of measuring, several problems have to be solved:
  • How are the units of measurement combined to generate a larger magnitude? E.g. units of length have to be placed contiguous to other units of length on a straight line; it is forbidden to overlap them, to leave gaps, or to put them on a curved line.
  • How are two magnitudes compared? You have to describe the experiment used to compare the weight to be measured against the units of measure using a mechanical device.
  • How is a magnitude divided into equally valued parts? How is the unit of measure divided to get fractional units?
These problems are mainly related to the physical properties of the magnitude to be measured.
Measuring can be related to the invention of rational numbers, because of the need to divide the measuring unit into equally valued parts. This could be avoided if the unit of measure is small enough. It can also be related to the invention of real numbers, if it is admitted that a magnitude can be divided into infinitesimally small parts (which is not clear from a practical or even a theoretical point of view when considering real magnitudes). But both can be introduced simply as generalizations of natural numbers.
But the only new concept that arises from counting is the concept of natural number. Counting is establishing a bijection between the set being counted and a set of new entities, the natural numbers; the only meaning of its elements is what all the sets that give the same outcome when counted have in common. That is why I think it is better to use “the concept that arises from counting” instead of “an abstract idea used in counting”.
I think that making reference to measuring has the intention of opening doors to generalizations of natural numbers, but there are other kinds of numbers that cannot be used in measuring. So I think it is preferable to use directly the expression “or a generalization of this concept”.--147.83.48.87 (talk) 18:17, 2 January 2008 (UTC)
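The reduction of measuring to counting described above can be shown in a few lines. A toy Python sketch (my own, assuming the magnitude is itself given exactly as a rational, which sidesteps the physical problems listed above): count whole units, then subdivide the unit and count the remainder again, producing successively better rational values.

 from fractions import Fraction

 def measure(magnitude, unit=Fraction(1), subdivisions=10, rounds=4):
     """Measure by repeated counting: how many copies of the current
     unit fit, then refine the unit and count the remainder again."""
     total = Fraction(0)
     remainder = Fraction(magnitude)
     u = Fraction(unit)
     for _ in range(rounds):
         count = remainder // u       # counting: how many units fit
         total += count * u
         remainder -= count * u
         u /= subdivisions            # a finer, "fractional" unit
     return total                     # a rational approximation

 # measure(Fraction(355, 113)) -> Fraction(3141, 1000), i.e. 3.141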

I really wish people would drop this useless effort to find a general definition of "number" in the context of this article. You're not going to find it, because there's no such thing -- "number" is a word applied by divers sources to differing collections of concepts, with some commonalities, but no clear demarcation between what belongs and what doesn't. What the article needs to do is simply present the various things that some reasonable fraction of the literature takes to be "numbers", without trying to make them tidier than they actually are. Even the current first sentence overreaches in that direction (how, for example, are octonions "used in counting and measuring"?).

I think the lead sentence should not start a number is... at all, because that formula almost promises that we're going to give a definition, and we can't. A better lead might begin something like

In mathematics, the notion of number is used in various ways, including abstractions used to count objects and measure quantities

Needs polishing, but you can see where I'm going -- we should point in the direction of the most used senses of the word, but without straining to extract a commonality that may not be there, and most especially without giving any warrant to claim that we're excluding anything in particular from numberhood. --Trovatore (talk) 05:39, 3 January 2008 (UTC)
"I am not a number, I am a free man!" Rick Norwood (talk) 13:28, 17 January 2008 (UTC)

vid

Wouldn't this work as a definition?

In mathematics, a number system is an algebraic structure consisting of abstract entities called numbers. Number systems typically come with a notion of "size of numbers" defined by an order relation or a norm. Willow1729 (talk) 00:53, 22 September 2008 (UTC)

This doesn't define a number, does it? It calls the elements of some set "numbers", but that's a bit self-referential, isn't it? I don't think we can ever get a nice precise axiomatic definition down for the concept either - consider that whatever definition we come up with has to deal with systems that have {1, 2, many} as their set of numbers, and other oddities, including systems that aren't closed under any operation. I think at best we can give some abstract description, and then we can give precise examples later. Just my gut feeling on the matter though, I don't have any sources to back it up, or that disagree with you. Cheers, Ben (talk) 11:30, 23 September 2008 (UTC)
I don't think we need a definition of Number, indeed any such definition would be OR. We can list those things which are typically called numbers N, Z, Q, R, C and a few others. Common properties of these are better left to those properties of algebraic structures they share, which can be stated precisely. --Salix alba (talk) 14:36, 23 September 2008 (UTC)
I know there are lots of difficulties in defining number systems, but I still think it's feasible. I looked through several Wikipedia articles on numbers, and it doesn't look too hard to characterize them mathematically.
First of all, numbers are things that you can manipulate algebraically. It's true that there are sets of numbers that aren't closed under multiplication or addition; for example, the irrational numbers are not closed under these operations. So, if you want to call the irrational numbers a "number system" I guess you should define a number system to be an algebraic structure as I described above or any subset of such a structure.
Second, I don't think you have to worry about things like {1, 2, many} because these don't have articles on Wikipedia. The purpose of defining a number is to provide an abstract characterization of things like N, Z, Q, R, etc.
The point I was trying to make with my definition is that numbers are abstract mathematical entities with algebraic structure. Of course a number system isn't just any algebraic structure; numbers are usually characterized by the fact that there is a notion of "size of numbers". This is true of every number system I can think of except the sedenions. This system does not have a norm, so it doesn't make sense to talk about the size of a sedenion. However, I don't think this presents any real problem. The sedenions are more of an abstract algebraic structure than a number system. Willow1729 (talk) 23:59, 23 September 2008 (UTC)
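Whatever one thinks of the proposed definition, the "size without order" distinction it relies on is easy to demonstrate. In Python (a throwaway illustration, not a proposal for the article), the complex numbers come with a norm but refuse an order:

 z = 3 + 4j
 print(abs(z))            # 5.0 -- the norm assigns every number a size
 try:
     z < 1 + 1j           # but there is no order relation to compare by
 except TypeError as err:
     print("unorderable:", err)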
I think you're sort of missing the point here. The question is not whether you can come up with a good definition — let's assume for the sake of argument that you can. You still can't put it here, because it would be original research. Even if you can find a sourced definition in the literature, and even if it's a good one, you still can't use it in the opening paragraph and say it's what a number is, because it's not the standard definition in the literature (that's obvious, because there isn't one). (In the second case it might be OK to mention it later in the article.) --Trovatore (talk) 00:30, 24 September 2008 (UTC)

Definition revisited

An editor has been adding:

A number is the symbolic representation or "name" of a quantity.

I don't think this is exactly correct. A number is abstract, not symbolic. Nor does it even apply to complex numbers. — Arthur Rubin (talk) 17:41, 5 June 2009 (UTC)

Hi Arthur. I think a number is the notation or writing of a quantity. At least, I think such a notion should be reflected in the article as a secondary or complementary definition, if you prefer this so. Another possibility, of course, is not to include anything at all about the idea of number as quantity notation, but in this case we wouldn't have a good article. I know in Maths there are many abstractions and generalizations to consider (such as complex numbers, as you mention) that can make the question complex, especially concerning terminological uses. However the concept of number as the notational representation of quantity is the best description I can find according to Mathematics, and I honestly find it at least recommendable to reflect this idea on the article. Bests.


(Also, check out Quantity talk page)--Faustnh (talk) 18:57, 5 June 2009 (UTC)
Technically, from the point of view of mathematical logic, the symbolic representation is not a number, but a numeral. (This is a slight generalization of the term numeral from the everyday use, which is more or less synonymous with digit.)
But the main important point here is that the article absolutely must not attempt to define the notion of "number", because there simply is no single accepted definition. We've been over this bunches of times. No "definition" of number will ever be acceptable by WP standards; it will always be original research, undue weight, or some combination. --Trovatore (talk) 20:10, 5 June 2009 (UTC)
(ec)
I think, perhaps, you (Faustnh) are interpreting a number as represented in a numeral system or as a number name. For instance, the number twelve can be written as "12" (decimal), "C" (hexadecimal), "1100" (binary), etc. I don't think any of those is the "number" (although that's a separate philosophical dispute). In my opinion, all of those strings are names of the same number.
Your preferred definition also has problems with real numbers, and really has problems with complex numbers, as there's no order.
Arthur Rubin (talk) 20:25, 5 June 2009 (UTC)
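The number/numeral split Arthur describes is exactly what a programming language enforces: one abstract value, many string representations. A quick Python illustration (the Roman string is of course just typed by hand):

 n = 12                                # one abstract number...
 numerals = {
     "decimal":     str(n),            # '12'
     "hexadecimal": format(n, "x"),    # 'c'
     "binary":      format(n, "b"),    # '1100'
     "roman":       "XII",
 }                                     # ...many numerals naming it
 assert int(numerals["hexadecimal"], 16) == int(numerals["binary"], 2) == n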


I'd rather say all of those strings are names of the same quantity.
Anyway, I suggest including in the article the definition of number as "symbolic representation of quantity", as a restricted primary definition (or a particular, non-general, definition). If you still think it's not acceptable, then I leave it to your consideration. Bests. --Faustnh (talk) 20:36, 5 June 2009 (UTC)
(P.S.: in the case of complex numbers, I'd rather say there's no uni-linear order, or I'd even say there is still no satisfactory proof that a uni-linear ordering is possible at all). --Faustnh (talk) 20:44, 5 June 2009 (UTC)
I do not believe there is any general definition of "number" in mathematics that encompasses the traditional number systems (N,Z,Q,R,C), ordinal and cardinal numbers, hyperreal and surreal numbers, quaternions and octonions, and all other things that are called numbers. To define a number as a "quantity" seems circular. However, a number is certainly not a representation; that is a numeral. — Carl (CBM · talk) 04:27, 6 June 2009 (UTC)

figure

are there numbers that are real but not rational or irrational? 79.101.174.192 (talk) 17:55, 20 June 2009 (UTC)

ups! complex numbers, of course! :) 79.101.174.192 (talk) 17:56, 20 June 2009 (UTC)

ups again! complex are not real, so my above question remains! :)) 79.101.174.192 (talk) 17:57, 20 June 2009 (UTC)

i guess the number may be transcendental numbers, but those are missing in the figure. or not? 79.101.174.192 (talk) 17:59, 20 June 2009 (UTC)

The answer to your first question is "no". If a real number is not rational then, by definition, it is irrational. The diagram is easily misread and somewhat misleading - see the thread "Rational/irrational" above. As for the relationship between transcendental numbers and irrational numbers:
  1. A rational number a/b is a solution of the equation bx = a.
  2. Therefore all rational numbers are algebraic numbers.
  3. Therefore all real transcendental numbers are irrational numbers. Gandalf61 (talk) 16:53, 21 June 2009 (UTC)
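Gandalf61's three-step argument can even be checked mechanically. A sketch using SymPy's assumption system (assuming a reasonably recent SymPy; these queries resolve for the standard constants shown):

 import sympy as sp

 x = sp.Symbol('x')
 print(sp.solve(4*x - 3))                       # [3/4]: the rational a/b solves b*x = a (step 1)
 print(sp.Rational(3, 4).is_algebraic)          # True: rationals are algebraic (step 2)
 print(sp.sqrt(2).is_rational, sp.sqrt(2).is_algebraic)   # False True: irrational but algebraic
 print(sp.pi.is_algebraic, sp.pi.is_rational)   # False False: transcendental, hence irrational (step 3)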
thanks! that thread indeed answers my confusion! plus, the figure can be fixed by putting 'irrational' in the complement part, and 'transcendental' where 'irrational' currently is. 79.101.174.192 (talk) 18:22, 21 June 2009 (UTC)

Removal

The figure "File:REAL NUMBERS.svg|thumb|300 px|upright|The real numbers contain the irrational, rational, integers and natural numbers, and transcendental numbers." has been removed by 202.36.179.66 [7], on the grounds that it suggests that: The image implies there are real numbers that are neither rational or irrational. Could also mislead one about the relative "sizes" of these sets.
I agree that the figure risks misleading, although it comes down to how one interprets the figure; the caption gives no clue, such as saying that it is a Venn diagram. — Charles Stewart (talk) 07:57, 20 July 2009 (UTC)

History

"speculated that the first known" Did you really write that? —Preceding unsigned comment added by 64.85.211.7 (talk) 18:55, 11 August 2009 (UTC)

GA Review

This review is transcluded from Talk:Number/GA1. The edit link for this section can be used to add comments to the review.

I come to this as an intelligent but ignorant reader, and it is my habit to comment on the article as I read it the first time from printed copy.

I'll thus expand this review over time. It may take a few days to review it fully. I'll make what I consider minor, uncontroversial copy edits, but feel free to revert them. Other suggestions for copy edits I'll list here.

Reviewer: Si Trew (talk) 11:38, 19 July 2010 (UTC)

GA review (see here for criteria)
  1. It is reasonably well written.
    a (prose): Good b (MoS): Good
    I see no comments on the Talk page about making particular exceptions to MoS typography here, so am going with MoS (particularly WP:MOSNUM) unless it is absurd to do so. For example, minus signs should use &minus;, and fractions should be "of the fraction form" (although it does not define what that is, suggesting only that {{frac}} is available).
    Done I've made a number of changes for MOS compliance, see the edit summaries.
  2. It is factually accurate and verifiable.
    a (references): Fail b (citations to reliable sources): Fail (OR): Good
    Very few inline citations. I realise this is a general-purpose article and not a deep mathematical article, but when calling out particular theorems or particular mathematicians, it should be referenced better; I've marked a couple of things in particular as {{cn}}, but I think that really every time a formula etc is attributed or a year of discovery mentioned, there should be an inline reference.
    I also note that in the later parts of the history we suddenly start getting date-style inline references (1790) for example, yet no reference for what that refers to.
  3. It is broad in its coverage.
    a (major aspects): Good b (focused): Good
    Pleasantly surprised that it covers the basics well without going into too much detail.
  4. It follows the neutral point of view policy.
    Fair representation without bias: Good
There was occasional use of flattering or subjective language ("well-known" etc), which I have removed.
  5. It is stable.
    No edit wars, etc.: Good
    Looks fine there.
  6. It is illustrated by images, where possible and appropriate.
    a (images are tagged and non-free images have fair use rationales): Neutral b (appropriate use with suitable captions): Fail
    There's not a single image in the article. I think it could do with some; the articles on rational numbers, complex numbers and so on have images.
  7. Overall:
    Pass/Fail: Fail
    Holding pending better references and a few images, please. Si Trew (talk) 13:53, 19 July 2010 (UTC)
    Unfortunately I feel I have to fail this. There's been no attempt as far as I can see to address my concerns as to citations or use of images. I would have happily given examples of what kind of images to use if there was any attempt to address them, but there has not been. Sadly, I fail on that point.
    If I am mistaken in my view that this article is intended not for the general reader but for mathematicians, this should go to WP:GAR.
    It's a pity since except the lack of images, and the references to particular laws etc which I think could have been easily fixed, this should have been an easy GA pass. I fail it with reluctance. Si Trew (talk) 16:40, 26 July 2010 (UTC)

Copy Edit

Classification of numbers

  • Good The table was headed "Numbers" after a paragraph talking about "number systems"; I've changed the heading to match.
  • Neutral The natural numbers are "one, two, three", then they are "0, 1, 2, 3"..., I don't want this to get too technical right here, but it's an important distinction to make early on whether the natural numbers includes zero or not (i.e. it depends.)
Done. Gandalf61 (talk) 13:01, 19 July 2010 (UTC)
Unfortunately this table has been changed in [this edit] to be far more complex. I had assumed the purpose of this article was as an introduction to the concept of number for people who are not primarily mathematicians (other articles it rightly links to go into more detail); this to me just confuses things. For one thing, the para immediately above it calls them "number systems" and the whole point then is to show examples of number systems, not "counting systems" as this table now has it.
I haven't been doing GA reviews for long so please forgive me if I am out of order. I prefer to pass articles than fail them, and will work with the editors of the articles to achieve that. I just think that edit makes the article worse not better, to the point it would fail my GA review for being too obscure for the nature of the article (i.e. not focused). Si Trew (talk) 19:09, 21 July 2010 (UTC)
Done Fixed with this change. (ES: Undo good faith edit, far too large with numerous errors.)
I've left I hope a courteous note on the undone editor's talk page, and a welcome. Si Trew (talk) 20:32, 21 July 2010 (UTC)
I think we could remove that whole table. It adds nothing to the explanations in the following paragraphs, and has potential to confuse the reader. Gandalf61 (talk) 09:10, 22 July 2010 (UTC)
That's up to you, I can see the point of it as kind of an intro, and can also see the point of removing it. Either way is fine by me from the point of view of the GA review. Si Trew (talk) 13:18, 22 July 2010 (UTC)

Real numbers

  • Done The sentence starting "In abstract algebra, the real numbers are up to isomorphism uniquely characterized" is confusing to me (I think there may be a slip here).
Reworded. Gandalf61 (talk) 13:01, 19 July 2010 (UTC)

{{frac}}

  • WP:MOSMATH#Fractions strongly suggests that {{frac}} not be used. I've converted {{frac}} to a new template {{fracText}}, which meets the textual form suggested there. If someone wants to restore {{frac}}, it should probably be discussed at the various MOSs. — Arthur Rubin (talk) 22:42, 19 July 2010 (UTC)
That's fine; I'd already commented about the vagueness at WT:MOSNUM#Fractions, but no contributions from other editors there yet. Si Trew (talk) 19:11, 21 July 2010 (UTC)

Not subsets

Strictly speaking the following statement under complex numbers is wrong:

Each of the number systems mentioned above is a proper subset of the next number system. Symbolically, ℕ ⊂ ℤ ⊂ ℚ ⊂ ℝ ⊂ ℂ.

For instance, natural numbers are defined as sets containing sets, integers as pairs of natural numbers under an equivalence relation based on subtraction, rational numbers as pairs of integers again, and real numbers as pairs of sets defining a Dedekind cut (or perhaps by other definitions), but they are not subsets in the set-theoretic sense.

However, we all know what it means; has anyone got a wording that would be more accurate without being too pedantic and silly? Dmcq (talk) 12:43, 19 July 2010 (UTC)

The statement as it stands reflects major reference books. Your objection is technically correct, but we say "A is a subset of B" even when what we really mean is "There is a subset of B isomorphic to A." The isomorphism is understood. This situation is similar to saying "1 + 1 = 2" instead of saying "the number represented by the numeral 1 added to the number represented by the numeral 1 equals the number represented by the numeral 2." Rick Norwood (talk) 13:01, 19 July 2010 (UTC)
Actually, by "A is a subset of B" we mean that "The canonical map from A to B is 1-1,", rather than "There is a subset of B isomorphic to A." Still, the mathematical convention should stand. — Arthur Rubin (talk) 22:38, 19 July 2010 (UTC)
It is a point that bothers me slightly, because I do think the reals are different enough in kind from the simpler structures to justify thinking of, say, the real number zero, as a different object from the natural number zero, and this at a more fundamental level than, say, the choice of a coding via Cauchy sequences or Dedekind cuts. But I doubt there's any useful way to talk about that at the level of an article like number. --Trovatore (talk) 22:48, 19 July 2010 (UTC)
I think it would be counterproductive at number – perhaps I am mistaken, but it should be a general introduction. That is to say, almost everyone knows (or rather thinks they know) what a number is and so this article shows it is a bit more complicated than that, but it doesn't have to dot every I or cross every T, referring to other articles is more appropriate. It should, of course, be correct. Si Trew (talk) 20:59, 30 July 2010 (UTC)
I think the statement above, "we all know what it means", indicates the problem in this article about trying to decide what audience it is aiming at. Si Trew (talk) 21:01, 30 July 2010 (UTC)

Si Trew didn't say, "we all know what it means", he said, "almost everyone knows (or rather thinks they know) what a number is". It is more complicated than most people realize. For example, for centuries major mathematicians denied that a negative number was really a number. This article gives a good working definition and then mentions some of the deeper questions. Rick Norwood (talk) 12:07, 31 July 2010 (UTC)

I would like to mention two points that did not come out clearly enough in the discussion above. 1. Numbers are arguably more "primitive" objects than set theory. Thus, the construction of the integers in terms of pairs of natural numbers is just that: a particular construction, which does not change the fact that natural numbers are found among the integers. 2. While there is not much argument about the status of natural, integer, and rational numbers, there is a bit of a controversy about what constitutes the reals. The intuitionist perspective is quite different from the classical one. The current state of the page does not reflect this at all. Perhaps this is appropriate for the level the page is aiming at, or perhaps at least a brief mention should be included. Tkuvho (talk) 10:08, 1 August 2010 (UTC)

I think a discussion of the difference between the intuitionists and the classical perspective would be more appropriate in the article real number. Rick Norwood (talk) 12:49, 1 August 2010 (UTC)

Negative numbers

The article deals with negative numbers in a light and naïve way. It starts talking about negative numbers within the "Integers" section, but the definition ("numbers that are less than zero") would certainly include, among infinitely many others, -1/3, -sqrt(2), and -π. Moreover, the example provided (money in a bank) is also wrong: I can withdraw $123.45 from a bank, and the number representing that withdrawal (-123.45) is certainly not an integer. The problem is not solved by just moving it out of the "Integers" section, because the original intention was clearly to deal with negative integers (e.g. "when the set of negative numbers is combined with the natural numbers and zero, the result is the set of integer numbers"). In the context of the hierarchical construction of number sets, when we only have the natural numbers, it makes sense to define the negative integers, and with them and the natural numbers (including 0) get the integers. Perhaps we should try to avoid using the expression "negative integers" since we have not defined the integers yet (we need the negative integers to do that), but saying just "negative numbers" does not help, because that expression means a different thing (all of the negative numbers). We could talk about "the numbers which are opposite to the natural numbers" or something along those lines. And clearly, the bank example has no place in the article. El Changuito (talk) 18:58, 22 September 2010 (UTC)

I don't think that it matters that the section implicitly talks about negatives of numbers other than of the integers. You have a good point though, which I think we can indeed simply fix by replacing
"When the set of negative numbers is combined with the natural numbers and zero..."
with
"When the set of the numbers which are opposite to the natural numbers is combined with the natural numbers and zero..."
DVdm (talk) 19:30, 22 September 2010 (UTC)
That will just lead to confusing questions about what 'opposite' means. Negative integers is fine, or negative whole numbers if you really can't bear to use a name before it is defined. Dmcq (talk) 20:04, 22 September 2010 (UTC)
At the article's level of simplicity, "opposite to" is just (sort of very loosely) defined. The (subtle, but pertinent) point was that the set of "negative numbers" is too large to be combined with the natural numbers. It's only the set of negatives of the natural numbers that should be combined with the natural numbers and zero, to produce, by definition, the set of integers. As the phrase stands now, it is just plain wrong. And of course, we can't talk about negative integers before we have introduced integers, which is what we are doing just now. DVdm (talk) 20:41, 22 September 2010 (UTC)
I perfectly well understand what you are saying about the numbers. What I said and what I again repeat is that sticking in another term like 'opposite of' does not help; it is just replacing one word, 'negative', with another that isn't yet defined and which is unnecessary and confusing. And there is nothing wrong with using a term before it is formally defined; it can be far better than getting oneself tied up trying to avoid the obvious. I suggested using 'negative whole number' as a well understood term which, while not defined here, will get by for the moment. Dmcq (talk) 20:59, 22 September 2010 (UTC)
You mean like changing the bad phrase to
"When the set of the negative whole numbers is combined with the natural numbers and zero..." ?
I guess that would solve the original problem just as well indeed.

But perhaps we really should informally (but i.m.o. much more properly) first define the negative (or the opposite) of a natural number as something that produces zero when added to the number. Combining the set of these with the set of naturals and zero then produces the integers. DVdm (talk) 21:36, 22 September 2010 (UTC)
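DVdm's suggestion in miniature (a toy Python sketch; the built-in unary minus stands in for the freshly defined "opposite"): each natural n gets an opposite satisfying n + (-n) == 0, and the union of the three sets is the integers.

 naturals = set(range(1, 6))
 opposites = {-n for n in naturals}           # defined by n + (-n) == 0
 integers = sorted(opposites | {0} | naturals)
 print(integers)    # [-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5]
 assert all(n + (-n) == 0 for n in naturals)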

I went ahead and made some trivial changes. Feel free to improve. DVdm (talk) 19:36, 23 September 2010 (UTC)

Article Natural_number says zero may or may not be included in set of natural numbers.

I saw an edit that suggested it's unnecessary in this article to mention zero in distinction from natural numbers, because zero is a natural number. But that does not seem to be an invariant property of the term "natural number" in the several sources I have at hand. Many sources start the set of natural numbers at 1, and I think that this article should reflect that uncertainty of definition, as the article Natural_number does. -- WeijiBaikeBianji (talk) 04:14, 24 September 2010 (UTC)

Yes, I agree with you. I looked at that issue briefly a while back but the correct fix wasn't immediately obvious. Shouldn't be too hard to find reasonable language; go for it. --Trovatore (talk) 07:41, 24 September 2010 (UTC)
The current article already reflects this in the section Number#Natural numbers: "Traditionally, the sequence of natural numbers started with 1. However, in the 19th century, set theorists and other mathematicians started the convention of including 0 in the set of natural numbers". DVdm (talk) 14:36, 24 September 2010 (UTC)
The article did not reflect what User DVdm indicates it did. The use of the word "convention" means that it is generally agreed upon to include zero. If I take my books off the shelf, about half will include zero, and about half will not. I fixed this yesterday. The article now indicates how "Natural" may be used to describe two different sets. If you can word it better, feel free of course. Cliff (talk) 16:49, 28 March 2011 (UTC)
Automated theorem proving tends to use the definition with them starting at 1, so we might be heading back to the old ways again! 20:28, 28 March 2011 (UTC)

Irrational Section.

Currently, irrational numbers are only defined in an effort to explain Real numbers. Shouldn't we have a section titled "Irrational numbers" immediately following the section on Rational numbers? It would make more sense to explain what an irrational number is in its own section rather than when trying to define Real numbers. Cliff (talk) 18:04, 14 February 2011 (UTC)

They are described in their own section. There is nothing about the description which says they were only defined in an effort to explain real numbers. Dmcq (talk) 22:13, 14 February 2011 (UTC)
The table of contents reads:
   * 1.1 Natural numbers
   * 1.2 Integers
   * 1.3 Rational numbers
   * 1.4 Real numbers
   * 1.5 Complex numbers
   * 1.6 Computable numbers
   * 1.7 Other types
   * 1.8 Specific uses
This doesn't have Irrational numbers. Is there a reason it doesn't? If not, I'll create the new section. Cliff (talk) 23:44, 14 February 2011 (UTC)
I think this is not a good idea. Irrationals are defined by exclusion — they are simply real numbers that aren't rational. They are not a new "type". --Trovatore (talk) 23:49, 14 February 2011 (UTC)
So you define real numbers before defining irrational numbers? How does that work? Cliff (talk) 00:04, 15 February 2011 (UTC)
Oh, various ways. Take a look at real number for possibly more than you want to know. Let me turn the question around on you: How exactly would you define irrational numbers, without first having defined the real numbers? --Trovatore (talk) 00:09, 15 February 2011 (UTC)
Sorry, I looked at the history section, where irrationals are quite properly described before going on to the reals, as in the discovery of irrational numbers. However, nowadays one would, as Trovatore says, define the reals first and then show there are reals which aren't rational. Dmcq (talk) 10:27, 15 February 2011 (UTC)

Ok. I see what you're saying. My question now is this: What is the point of this article? Is it to be a rigorous mathematics textbook? If so, then we're on the right path. But if it is to be a reference for someone who might want to know more about numbers but doesn't have a mathematics background (an encyclopedia), then perhaps we should consider talking about irrational numbers in a way that is approachable to the mainstream. Consider the person who would put "number" into the search bar. Do mathematicians do this (apparently you and I did)? What is our purpose in coming here, and does it fit with the "lay" person's reason for coming here? Cliff (talk) 18:49, 15 February 2011 (UTC)

It is an encyclopaedia rather than a textbook. But besides that I really do think the modern order is easiest. The Greeks didn't have decimal places but now it's the way numbers are expressed even in junior schools. Dmcq (talk) 19:23, 15 February 2011 (UTC)
What do you mean when you say "modern order"? Cliff (talk) 19:50, 15 February 2011 (UTC)
I just mean doing the real numbers rather than following the history and dealing with irrationals before getting to the real numbers. Children learn about expressing length using decimal numbers rather than as fractions and the proof of irrationality of the square root of two wouldn't be dealt with in a junior school. Dmcq (talk) 23:32, 15 February 2011 (UTC)
Still not sure what you mean. Do you support the addition of a section on Irrational numbers and how to identify them? Or do you support the status quo? Cliff (talk) 16:51, 28 March 2011 (UTC)

Transcendental numbers and reals

Is there a reason that the histories of these two different sets of numbers are discussed in the same section? Cliff (talk) 00:06, 15 February 2011 (UTC)


Table in classification section

The first three in the table describe sets of numbers, but the last three describe individual elements from the respective sets. If anyone can think about how to fix this, please do. I'll think about it. Cliff (talk) 20:34, 3 April 2011 (UTC)

Describing a general element from a set is one of the standard ways of describing a set; see 'special sets' in set (mathematics). Dmcq (talk) 20:45, 3 April 2011 (UTC)
I'm not expressing confusion about what is going on, I'm expressing concern over the clarity of this article as it pertains to the "average" reader, not editor. Cliff (talk) 04:17, 4 April 2011 (UTC)