TIL: apart from infinity (a number larger than any real number in absolute value) there are also infinitesimals (numbers smaller than any positive real number in absolute value, yet not zero).
Edit: also, those infinitesimals were the subject of political and religious controversies in 17th century Europe, including a ban on infinitesimals issued by clerics in Rome in 1632.
Those clerics were ahead of their time. They probably would have banned large cardinals as well (infinities so large we can't prove whether or not they exist).
Infinitesimals and infinities are nice if you're doing some sort of bucketing logic.
tiny = infinitesimal
huge = infinity
N = number to be bucketed
tiny <= N < 10 --> bucket 1
10 <= N < 20 --> bucket 2
...
X <= N < huge --> last bucket
This removes edge cases you need to test for if you're trying to bucket positive values. This may not be something you've had to do, but I've had reason to want this before on a few occasions.
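A minimal Python sketch of what I mean (my own illustration, nothing canonical; it uses math.inf for "huge" and, as a stand-in for "tiny", the smallest positive subnormal double from math.ulp(0.0), which needs Python 3.9+):

import math

# Half-open buckets [lo, hi); 'tiny' and 'huge' act as sentinels so the
# first and last buckets need no special-case handling for positive inputs.
tiny = math.ulp(0.0)   # smallest representable positive double (5e-324)
huge = math.inf        # positive infinity
bounds = [tiny, 10, 20, 30, huge]

def bucket(n):
    # Return the 1-based bucket index for a positive n, or None if n < tiny.
    for i, (lo, hi) in enumerate(zip(bounds, bounds[1:]), start=1):
        if lo <= n < hi:
            return i
    return None

print(bucket(5))      # 1
print(bucket(25))     # 3
print(bucket(1e100))  # 4 (the last, unbounded bucket)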
Infinitesimals do not exist in the standard real number system. 'tiny' seems to be closer to the smallest positive representable (normal or subnormal) IEEE 754 float/double value, which is a real number.
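For concreteness, here is roughly what those values look like in Python (assuming CPython's float is an IEEE 754 double; math.ulp needs Python 3.9+):

import sys, math

print(sys.float_info.min)  # smallest positive normal double, about 2.2250738585072014e-308
print(math.ulp(0.0))       # smallest positive subnormal double, 5e-324
print(math.ulp(0.0) / 2)   # underflows to 0.0 -- there is nothing smaller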
Infinitesimals tautologically exist in any finite representation of numbers. For floats, it's the smallest representable positive number. For integers of any type, it's 1.
As for how common they are, we learn about them in any introductory calculus course when defining derivatives. You come across the idea whenever discussing limits, if somewhat obliquely.
If I learned about it in high school math, and again in "real" math courses at my university, I'd say it's pretty standard.
I've never heard of that, and your definition of 1 as an infinitesimal is incompatible with its properties (infinitesimal + infinitesimal + infinitesimal is greater than a non-infinitesimal 2?!). I don't see a mention on Wikipedia, and it goes against the plain reading of "in-finite-simal".
Also, you seem to be conflating "common" with "standard". "Standard" is a mathematical term here. Infinitesimals are handwavy in standard analysis (epsilon-delta arguments are the rigorous alternative), but they exist rigorously in nonstandard analysis.
I guess I'm using the wrong term, then. I often find it useful to have a concept of "smallest representable positive number," specifically for handling edge cases such as the one I gave up-thread. I see how that doesn't map to infinitesimal as defined in the shared link.
There are other instances where I've needed such a smallest positive number, where it simplifies the logic compared to checking for 0 in a special way. Whether or not there's an agreed-upon term for it, I know where I've found value for it in programming tasks.
When I need such a thing, it is almost invariably in comparisons, so I am not doing arithmetic with multiple instances of that smallest representable positive number.
The theory of infinitesimals is intimately connected to how analysis (differential and integral calculus) was first formulated. Leibniz and Newton understood that you could approximate instant rates of change, or areas under a curve, by taking smaller and smaller "slices" of a function, but they did not yet have the rigorously formalized notion of limits that modern analysis depends on. So they developed an arithmetic of infinitesimals, numbers greater than zero but less than any positive real number¹, with some rather ad-hoc properties to make them work out in calculations.
Philosophical problems surrounding the perplexing concept of infinity were already hotly debated by the ancient Greeks. Aristotle made an ontological distinction between actual and potential infinities, and argued that actual infinity cannot exist but potential infinity can. This was also the consensus position of later scholars, and became a sticking point in the acceptance of calculus because infinitesimals (and infinite sums of them) were an example of the ontologically questionable actual infinities.
As I mentioned before, standard modern analysis is based on limits, not infinitesimals, and requires no extension of the real numbers. Indeed, the limit definition of calculus only requires the concept of potential infinities, so philosophers should be able to rest easy! But infinitesimals still occur in our notation, which is largely inherited from Leibniz. We say that the derivative of y(x) is dy/dx, or the antiderivative of y(x) is ∫ y(x) dx, and while acknowledging that dy and dx are not actual mathematical objects, just syntax, we still do arithmetic on them whenever it's convenient to do so! For example, when we make a change of variables in an integral, we can substitute x = f(t) for some f, then say dx/dt = f'(t) and "multiply by dt" to get dx = f'(t) dt, to figure out what we should put in place of the "dx" in the integral.
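To make that concrete with an illustrative example of my own: substituting x = sin(t) (with t restricted so that cos(t) ≥ 0) gives dx/dt = cos(t), so "dx = cos(t) dt", and ∫ √(1 - x²) dx becomes ∫ √(1 - sin²(t)) cos(t) dt = ∫ cos²(t) dt, which is straightforward to integrate.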
Actual infinitesimal numbers are not dead, either: they're used in a branch of analysis called nonstandard analysis, which formalizes them with the logical rigor now expected from mathematics.
________
¹ Not that they had a rigorous theory of real numbers either; that came in the 19th and early 20th century. In fact, what we now understand as formal, axiomatized math didn't really exist before the 19th century at all!