An Erdős-Kac theorem for Smooth and Ultra-smooth integers


Marzieh Mehdizadeh, Département de Mathématiques et Statistique, Université de Montréal, CP 6128, succ. Centre-ville, Montréal, QC, Canada H3C 3J7. marzieh.mehdizadeh@gmail.com
Abstract.

We prove an Erdős-Kac type theorem for the set S(x, y) of y-smooth integers up to x. If ω(n) is the number of distinct prime factors of n, we prove that the distribution of ω(n), suitably normalized, for n ∈ S(x, y) is Gaussian in a certain range of the parameters, using the method of moments. The advantage of the present approach is that, in the range considered, it recovers classical results with a much simpler proof.

1. Introduction

For an integer n ≥ 1, let ω(n) denote the number of distinct prime divisors of n. In 1940, Erdős and Kac [5], in their celebrated work, studied the distribution of ω(n) for n in the interval [1, x]. Their theorem states that for any real number V, we have

(1)    \lim_{x \to \infty} \frac{1}{x}\, \#\bigl\{\, n \le x : \omega(n) - \log\log x \le V \sqrt{\log\log x} \,\bigr\} = \Phi(V),

where Φ is the normal distribution function, defined by

       \Phi(V) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{V} e^{-t^{2}/2} \, dt.

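As a purely numerical illustration of (1) (this sketch is not part of the paper), the following Python code tabulates ω(n) for n ≤ x with a sieve and compares the empirical frequency on the left-hand side of (1) with Φ(V). The cutoff x = 10^5 is an arbitrary choice; since the normalization involves log log x, convergence is very slow and the agreement is only rough.

```python
# Numerical sketch (illustration only): empirical Erdos-Kac frequencies vs. Phi(V).
import math

def omega_table(x):
    """w[n] = number of distinct prime factors of n, for all n <= x (sieve)."""
    w = [0] * (x + 1)
    for p in range(2, x + 1):
        if w[p] == 0:                 # p is prime: no smaller prime has marked it
            for m in range(p, x + 1, p):
                w[m] += 1
    return w

def Phi(v):
    """Standard normal distribution function."""
    return 0.5 * (1 + math.erf(v / math.sqrt(2)))

x = 10**5
w = omega_table(x)
L = math.log(math.log(x))
for v in (-1.0, 0.0, 1.0, 2.0):
    freq = sum(1 for n in range(2, x + 1) if w[n] - L <= v * math.sqrt(L)) / x
    print(f"V = {v:+.1f}:  empirical {freq:.3f}   Phi(V) {Phi(v):.3f}")
```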
There are several proofs of the Erdős-Kac theorem. For instance, it has been proved by Billingsley [2] using the method of moments and by Granville and Soundararajan [7] using sieve methods. Different variations of this theorem have been considered by several authors. In the present note, we study the Erdős-Kac theorem for smooth numbers. Recall that

       S(x, y) = \{\, n \le x : P(n) \le y \,\}

is the set of y-smooth integers up to x, where P(n) is defined as the largest prime factor of n, with the convention P(1) = 1. Also, recall that we set

       \Psi(x, y) = \# S(x, y).

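The following brute-force sketch (illustration only; the parameters x = 10^4 and y = 20 are arbitrary) computes P(n), the smooth set S(x, y), and Ψ(x, y) directly from the definitions above.

```python
# Brute-force sketch (illustration only): P(n), S(x, y) and Psi(x, y).
def largest_prime_factor(n):
    """Return P(n), the largest prime factor of n, with the convention P(1) = 1."""
    p, largest = 2, 1
    while p * p <= n:
        while n % p == 0:
            largest, n = p, n // p
        p += 1
    return max(largest, n) if n > 1 else largest

def smooth_set(x, y):
    """Return S(x, y) = {n <= x : P(n) <= y}."""
    return [n for n in range(1, x + 1) if largest_prime_factor(n) <= y]

x, y = 10**4, 20
S = smooth_set(x, y)
print("Psi(%d, %d) = %d" % (x, y, len(S)))
```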
The main goal of this note is to prove an analogue of (1) for the set S(x, y) in the range

(2)

where, as always,

       u = \frac{\log x}{\log y}.

Hildebrand [9], Alladi [1], and Hensley [8] have considered the distribution of the prime divisors of smooth integers in various ranges of the parameters x and y.
Hensley proved an Erdős-Kac type theorem when the parameters lie in the range

Using a different method, Alladi obtained an analogue of the Erdős-Kac theorem in the range

Later, Hildebrand extended the previous results to include the range

which completes the results of Alladi and Hensley.

Although (2) does not cover the ranges of Alladi, Hensley, and Hildebrand, our method is completely different from, and much simpler than, the methods used by these authors.
Our approach is based on the method of moments, as used by Billingsley in [2]. We introduce a family of (approximately independent) random variables and, by the central limit theorem, show that their normalized sum has a normal limiting distribution; then, by applying the method of moments, we obtain the desired analogue of (1).
The first step of the proof is to apply a truncation on the number of prime factors. This idea goes back to the original proof of the Erdős-Kac theorem [5].

For a given real number , set

This function helps us to sieve out all primes exceeding the truncation level, and we will show that the contribution of the sieved primes is negligible for understanding the distribution of ω(n). Before stating the main result, we introduce some notation. Let ω(n) denote the number of distinct prime divisors of a smooth number n, namely

       \omega(n) = \sum_{p} \mathbf{1}_{p \mid n},

where \mathbf{1}_{p \mid n} is 1 or 0 according as the prime p divides n or not.
Let μ(x, y) be the mean value of ω(n) over S(x, y); more formally,

       \mu(x, y) = \frac{1}{\Psi(x, y)} \sum_{n \in S(x, y)} \omega(n),

and let σ²(x, y) be the variance of ω(n) over S(x, y), defined by

       \sigma^{2}(x, y) = \frac{1}{\Psi(x, y)} \sum_{n \in S(x, y)} \bigl( \omega(n) - \mu(x, y) \bigr)^{2}.

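The quantities just defined can be evaluated by brute force for small parameters. The sketch below (illustration only; the choice x = 10^4, y = 30 is arbitrary) computes the empirical mean μ(x, y) and variance σ²(x, y) of ω(n) over S(x, y).

```python
# Brute-force sketch (illustration only): empirical mean and variance of omega(n) on S(x, y).
def omega_and_P(n):
    """Return (omega(n), P(n)) by trial division, with P(1) = 1."""
    count, p, largest = 0, 2, 1
    while p * p <= n:
        if n % p == 0:
            count += 1
            largest = p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        count, largest = count + 1, n
    return count, largest

x, y = 10**4, 30
omegas = []
for n in range(1, x + 1):
    w, P = omega_and_P(n)
    if P <= y:                      # keep only the y-smooth integers
        omegas.append(w)

Psi = len(omegas)
mu = sum(omegas) / Psi
sigma2 = sum((w - mu) ** 2 for w in omegas) / Psi
print(f"Psi = {Psi},  mu(x, y) = {mu:.3f},  sigma^2(x, y) = {sigma2:.3f}")
```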
Now we are ready to state the main theorem.

Theorem 1.1.

For any real number V, the estimate

(3)    \lim_{x \to \infty} \frac{1}{\Psi(x, y)} \, \#\bigl\{\, n \in S(x, y) : \omega(n) - \mu(x, y) \le V \sigma(x, y) \,\bigr\} = \Phi(V)

holds in the range (2).

Theorem 1.1 is proved in Section 3. The proof relies on the method of moments and on estimates for Ψ(x, y).

Let

       S^{*}(x, y) = \{\, n \le x : p^{\nu} \,\|\, n \ \Rightarrow\ p^{\nu} \le y \,\}

be the set of ultra-smooth integers up to x, whose canonical decomposition is free of prime powers exceeding y; here p^{\nu} \| n means that p^{\nu} divides n but p^{\nu+1} does not. We define

       \Psi^{*}(x, y) = \# S^{*}(x, y).

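A brute-force sketch of the ultra-smooth condition, reading "free of prime powers exceeding y" as: if p^ν exactly divides n then p^ν ≤ y, as above. This is an illustration only, and the parameters x = 10^4, y = 20 are arbitrary.

```python
# Brute-force sketch (illustration only): the ultra-smooth set S*(x, y) and Psi*(x, y).
def is_ultra_smooth(n, y):
    """True if every prime power p^nu exactly dividing n satisfies p^nu <= y."""
    p = 2
    while p * p <= n:
        if n % p == 0:
            q = 1
            while n % p == 0:
                q *= p
                n //= p
            if q > y:
                return False
        p += 1
    return n <= y                   # the remaining cofactor is 1 or a prime

x, y = 10**4, 20
Psi_star = sum(1 for n in range(1, x + 1) if is_ultra_smooth(n, y))
print("Psi*(%d, %d) = %d" % (x, y, Psi_star))
```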
We also have the following theorem.

Theorem 1.2.

For any real number V, the estimate

(4)

holds in the range (2).

The proof of Theorem 1.2 relies on the method of moments and on the local behaviour of the function Ψ*(x, y). Recalling [10, Corollary 1.3], in the relevant range we have

that is,

This relation between the local behaviour of Ψ*(x, y) and Ψ(x, y) yields a proof entirely similar to that of Theorem 1.1, so we omit the details.

Acknowledgement

I would like to thank Andrew Granville and Dimitris Koukoulopoulos for all their advice and encouragement as well as their valuable comments on the earlier version of the present paper. I am also grateful to Adam Harper, Sary Drappeau and Oleksiy Klurman for helpful conversations.

2. Preliminaries

Here we briefly recall some standard facts from probability theory (see Feller [6] for more details), and we give a few important lemmas.

Remark 1.

If a sequence of random variables X_n converges to 0 in probability, then a second sequence of random variables Y_n (defined on the same probability space) tends to a random variable Y in distribution if and only if X_n + Y_n tends to Y in distribution.


Remark 2.

If the distribution functions F_n satisfy ∫ t^k dF_n(t) → μ_k as n → ∞ for each k, where the μ_k are the moments of a distribution function F determined by its moments, then F_n(t) → F(t) for each continuity point t of F.


Remark 3.

If F_n(x) → F(x) for each x, and if ∫ |x|^{k+δ} dF_n(x) is bounded in n for some positive δ, then ∫ x^k dF_n(x) → ∫ x^k dF(x).


Remark 4.

(A special case of the central limit theorem.) If X_1, X_2, … are independent and uniformly bounded random variables with mean 0 and finite variances, and if ∑_n Var(X_n) diverges, then the distribution of (X_1 + ⋯ + X_n) / √(Var(X_1 + ⋯ + X_n)) converges to the normal distribution function Φ.


By recalling [4, Theorem 2.4], in the relevant range we have

(5)

where α = α(x, y) denotes the saddle point of the Perron integral for Ψ(x, y), that is, the solution of the equation

       \sum_{p \le y} \frac{\log p}{p^{\alpha} - 1} = \log x.

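Assuming the saddle-point equation is the standard one displayed above, α(x, y) can be computed numerically. The sketch below (illustration only; x = 10^6 and y = 100 are arbitrary test values) solves the equation by bisection.

```python
# Numerical sketch (illustration only): solve sum_{p<=y} log p / (p^a - 1) = log x for a.
import math

def primes_up_to(y):
    sieve = [True] * (y + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(y**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [p for p in range(2, y + 1) if sieve[p]]

def saddle_point(x, y, tol=1e-10):
    """Bisection for the root of f(a) = sum_{p<=y} log p/(p^a - 1) - log x."""
    ps = primes_up_to(y)
    f = lambda a: sum(math.log(p) / (p**a - 1) for p in ps) - math.log(x)
    lo, hi = 1e-9, 10.0             # f is decreasing; f(lo) > 0 > f(hi) for these values
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print("alpha(10^6, 100) ~", round(saddle_point(10**6, 100), 6))
```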
The saddle point α(x, y) will play an important role in this work, so we briefly recall some fundamental facts about it. By [4, Lemma 3.1], we have the following estimate for α(x, y):

(6)

where ξ = ξ(u) is the unique real non-zero root of the equation

       e^{\xi} = 1 + u\xi,

and when u is large, we have

(7)


By  [4, Lemma 4.1], we have the following important estimate

Lemma 2.1.

(de la Bretèche, Tenenbaum) Uniformly in the relevant range, we have

(8)

Here we use a particular case of Lemma 2.1: if the range of the parameter is restricted appropriately, we get

thus,

(9)


For , we define

Using the saddle point method, de la Bretèche and Tenenbaum [3] obtained an estimate for the expectation and the variance of this quantity. First, we define

We state the following lemma from  [3].

Lemma 2.2.

(de la Bretèche, Tenenbaum) We have, uniformly for

(10)

We now study the expectation of ω(n) for n ∈ S(x, y).

Lemma 2.3.

If , then we have

Proof.

Let in Lemma 2.2, then we have

By using (9), we get

Now by letting , we have

and the proof is complete. ∎


Lemma 2.4.

If and , then we have

(11)
Proof.

We have

since this factor is bounded. By the estimate for α(x, y) in (6) and using Mertens' estimate, we obtain

(12)

By applying the estimate of ξ(u) in (7), we get the desired result. ∎
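As a quick numerical check of the Mertens estimate invoked in the proof (this sketch is not part of the paper; the cutoffs are arbitrary), the sum of 1/p over primes p ≤ y is log log y + M + o(1), where M ≈ 0.2615 is Mertens' constant.

```python
# Numerical sketch (illustration only): Mertens' second theorem, sum_{p<=y} 1/p ~ log log y + M.
import math

def primes_up_to(y):
    sieve = [True] * (y + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(y**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [p for p in range(2, y + 1) if sieve[p]]

for y in (10**3, 10**4, 10**5, 10**6):
    s = sum(1.0 / p for p in primes_up_to(y))
    print(f"y = {y:>7}:  sum 1/p - log log y = {s - math.log(math.log(y)):.4f}")
```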


Here we introduce a truncated version of ω(n). In the following lemma and corollary, we show that the contribution of large prime factors does not affect the expected value of the number of prime factors, and hence the distribution of ω(n), when the relevant parameter is small enough. We define

(13)

where

Lemma 2.5.

If , then we have

Proof.

By Lemma 2.4, we have

(14)

and we have our desired result. ∎

Now we define

In the following lemma we show that ω(n) can be replaced by its truncated version in the statement of Theorem 1.1.

Lemma 2.6.

Let , then we have

where the probability is taken with respect to an integer n chosen uniformly at random from S(x, y).

Proof.

We first find an estimate for the expectation; we have

Using Lemmas 2.3 and 2.5, we get

(15)

For the variance, using (15), we get

(16)

Now by Chebyshev’s inequality and using (16), we have

(17)

and we get our desired result. ∎
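For reference, the form of Chebyshev's inequality used in the last step is the standard one (the specific normalization appearing in (17) is not reproduced here): for a random variable X with finite mean and variance, and any λ > 0,

       \mathbb{P}\bigl( |X - \mathbb{E}X| \ge \lambda \sqrt{\operatorname{Var} X} \bigr) \le \frac{1}{\lambda^{2}}.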

By the above lemma and Remark 1, the estimate (3) is equivalent to the following:

(18)

which we prove in the next section.

3. Proof of Theorem 1.1

We begin this section by defining, on a probability space, a family of random variables, one for each prime p, satisfying

(19)

These random variables are independent.

Now we define the partial sum as follows

where
By the definition of these random variables and the estimates (5) and (9), we deduce that the partial sum has mean value and variance of the same order as those of ω(n) in the range (2); this means that the partial sum and ω(n) have roughly the same mean value and the same variance.
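The heuristic behind this comparison can be illustrated numerically. The sketch below is not the construction of the paper: since (19) is not reproduced here, it uses as a stand-in independent Bernoulli variables indexed by the primes p ≤ y with success probability p^{-α}, which by (5) and (9) is roughly the density of multiples of p in S(x, y); the values y = 10^4 and α = 0.8 are arbitrary. Their normalized sum is then approximately standard normal, as the central limit theorem (Remark 4) predicts.

```python
# Monte Carlo sketch (illustration only): independent Bernoulli variables, one per prime p <= y,
# with stand-in success probability p^(-alpha); the normalized sum is close to standard normal.
import math, random

def primes_up_to(y):
    sieve = [True] * (y + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(y**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [p for p in range(2, y + 1) if sieve[p]]

def normalized_sample(probs):
    mean = sum(probs)
    var = sum(q * (1 - q) for q in probs)
    s = sum(1 for q in probs if random.random() < q)
    return (s - mean) / math.sqrt(var)

random.seed(0)
alpha = 0.8                          # stand-in value; in the paper alpha = alpha(x, y)
probs = [p ** (-alpha) for p in primes_up_to(10**4)]
samples = [normalized_sample(probs) for _ in range(2000)]
print("sample mean %.3f, sample variance %.3f"
      % (sum(samples) / len(samples), sum(t * t for t in samples) / len(samples)))
```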

In the following lemma, we obtain an upper bound for the difference between the moments of these two quantities, where

Lemma 3.1.

If , then for any positive integer , we have

Proof.

By the definition of and , we have

and

So, for the difference of the moments, we have

(20)

Without loss of generality, we may assume that the primes involved are distinct; then, using the estimate (5), we have

The main terms in the above difference are identical and cancel. Therefore,

(21)

If , then . So we can ignore the term . Thus,

We now use Lemma 2.5, and we get the following upper bound for each

(22)


Proof of Theorem 1.1.

We start the proof by normalizing the partial sum defined above. Define

By the central limit theorem, the normalized sum has a limiting normal distribution, since the underlying random variables are independent. We set

Using the method of moments, we will show that the moments of the normalized ω(n) are very close to those of the corresponding normalized sum, and that both converge to the moments of the normal distribution, for every positive integer k.
By the multinomial theorem, we have

(23)
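For reference, the expansion behind (23) is the general multinomial theorem (the specific form of (23), with one variable for each prime in the truncated range, is not reproduced here); writing k for the order of the moment,

       \Bigl( \sum_{i=1}^{m} t_i \Bigr)^{k} = \sum_{\substack{k_1 + \cdots + k_m = k \\ k_i \ge 0}} \frac{k!}{k_1! \cdots k_m!}\, t_1^{k_1} \cdots t_m^{k_m}.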

Combining the upper bound in (22) with (23), we arrive at the following estimate:

(24)

Now using Lemma 2.3, we have

(25)

Thus,

We have shown that the difference of the moments tends to 0 for large values of x. By Remark 2, we conclude that the two random variables have the same limiting distribution.
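For reference (this is not part of the original argument), the standard normal distribution is determined by its moments, as required in Remark 2, and these moments are

       \int_{-\infty}^{\infty} t^{k} \, d\Phi(t) =
       \begin{cases} 0, & k \ \text{odd}, \\[2pt] (k-1)!! = \dfrac{k!}{2^{k/2} (k/2)!}, & k \ \text{even}. \end{cases}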


By Remark 4, the normalized sum has a limiting normal distribution. It remains to show that its moments are very close to those of the normal distribution.
By Remark 3, it suffices to prove that the moments remain bounded in x for each k.
In fact, we will show that, for each k,

(26)

To complete the proof, we define the random variables , which are independent.
We have

(27)

where the inner sum is over tuples (k_1, …, k_r) of positive integers with k_1 + ⋯ + k_r = k.
By the definition of , we have .
To avoid zero terms, we can assume that for each . Also we have . Thus,

Therefore, the value of the inner sum in (27) is at most

Each is strictly greater than , and we have ; this implies that

from which (26) follows.
We have verified all the conditions required for (18), and consequently (3), to hold. ∎

References

  • [1] Krishnaswami Alladi. An Erdős-Kac theorem for integers without large prime factors. Acta Arith., 49(1):81–105, 1987.
  • [2] Patrick Billingsley. On the central limit theorem for the prime divisor functions. Amer. Math. Monthly, 76:132–139, 1969.
  • [3] R. de la Bretèche and G. Tenenbaum. Entiers friables: inégalité de Turán-Kubilius et applications. Invent. Math., 159(3):531–588, 2005.
  • [4] Régis de la Bretèche and Gérald Tenenbaum. Propriétés statistiques des entiers friables. Ramanujan J., 9(1-2):139–202, 2005.
  • [5] P. Erdős and M. Kac. The Gaussian law of errors in the theory of additive number theoretic functions. Amer. J. Math., 62:738–742, 1940.
  • [6] William Feller. An introduction to probability theory and its applications, volume 1. John Wiley & Sons, New York, third edition, 1968.
  • [7] Andrew Granville and K. Soundararajan. Sieving and the Erdős-Kac theorem. In Equidistribution in number theory, an introduction, volume 237 of NATO Sci. Ser. II Math. Phys. Chem., pages 15–27. Springer, Dordrecht, 2007.
  • [8] Douglas Hensley. The distribution of Ω(n) among numbers with no large prime factors. In Analytic number theory and Diophantine problems (Stillwater, OK, 1984), volume 70 of Progr. Math., pages 247–281. Birkhäuser Boston, Boston, MA, 1987.
  • [9] Adolf Hildebrand. On the number of prime factors of integers without large prime divisors. J. Number Theory, 25(1):81–106, 1987.
  • [10] Gérald Tenenbaum. On ultrafriable integers. Q. J. Math., 66(1):333–351, 2015.