One of the things about the book that caught my interest from the very beginning was its front cover. It has a peculiarly drawn grid of white boxes and red empty regions that looks quite interesting. Here is the grid from the front cover of the book:
Can we come up with a simple and elegant rule that defines this grid? Here is one I could come up with:
We define \( \gcd(x, y) \) to be a nonnegative common divisor of \( x \) and \( y \) such that every common divisor of \( x \) and \( y \) also divides \( \gcd(x, y). \) Let us now see if we can explain some of the interesting properties of this grid using the above rule:
When \( x = 0 \) and \( y \ne 1, \) we get \( \gcd(x, y) = \lvert y \rvert \ne 1, \) so the entire column at \( x = 0 \) has boxes except at \( (0, 1). \) Similarly, the entire row at \( y = 0 \) has boxes except at \( (1, 0). \)
The cell \( (0, 0) \) has a box because \( \gcd(0, 0) \ne 1. \) In fact, \( \gcd(0, 0) = 0. \) This follows from the definition of the \( \gcd \) function. We will discuss this in more detail later in this post.
Every diagonal cell \( (x, x) \) has a box except at \( (1, 1) \) because \( \gcd(x, x) = \lvert x \rvert \) for all integers \( x. \)
The grid is symmetric about the diagonal cells \( (x, x) \) because \( \gcd(x, y) = \gcd(y, x). \)
A column at \( x \) has exactly one cell below the diagonal if and only if \( x \) is prime. For example, check the column for \( x = 5. \) It has exactly one cell below the diagonal. We know that \( 5 \) is prime. Now check the column for \( x = 6. \) It has four cells below the diagonal. We know that \( 6 \) is not prime.
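All of the properties above follow from a single rule: draw a box at \( (x, y) \) exactly when \( \gcd(x, y) \ne 1. \) Here is a small Python sketch (my own illustration, not from the book; `cover_grid` is a made-up helper name) that renders such a grid:

```python
from math import gcd

def cover_grid(size):
    """Render a size x size grid: '•' where gcd(x, y) != 1, '·' elsewhere."""
    rows = []
    for y in range(size - 1, -1, -1):   # print larger y at the top
        row = " ".join("•" if gcd(x, y) != 1 else "·" for x in range(size))
        rows.append(row)
    return "\n".join(rows)

print(cover_grid(10))
```

In the printed output you can spot the same features discussed above: full rows and columns of boxes along the axes (except at 1), the boxed diagonal, and exactly one box below the diagonal in the columns at primes.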
Let us now elaborate the second point in the list above. If \( \gcd(0, 0) \) is \( 0, \) then \( 0 \) must divide \( 0. \) Does \( 0 \) really divide \( 0? \) Isn't \( 0/0 \) undefined? Yes, even though \( 0/0 \) is undefined, \( 0 \) divides \( 0. \) We say an integer \( d \) divides an integer \( n \) when \( n = cd \) for some integer \( c. \) We have \( 0 = 0 \cdot 0, \) so indeed \( 0 \) divides \( 0. \)
We have shown that \( 0 \) divides \( 0 \) but we have not yet shown that \( \gcd(0, 0) = 0. \) Is \( \gcd(0, 0) \) really \( 0? \) Every integer divides \( 0, \) e.g., \( 1 \) divides \( 0, \) \( 2 \) divides \( 0, \) \( 3 \) divides \( 0, \) etc. There does not seem to be a greatest common divisor of \( 0 \) and \( 0. \) Shouldn't \( \gcd(0, 0) \) be called either infinity or undefined? No, we need to look at the definition of \( \gcd \) introduced earlier. As per the definition, every common divisor of integers \( x \) and \( y \) must also divide \( \gcd(x, y). \) With this requirement in mind, we see that \( \gcd(0, 0) \) must be \( 0. \) This definition also gives \( \gcd(n, 0) = \gcd(0, n) = \lvert n \rvert \) for all integers \( n. \) Further, this definition makes Bézout's identity hold for all integers. Bézout's identity states that there exist integers \( m \) and \( n \) such that \( mx + ny = \gcd(x, y). \) Indeed, if we have \( \gcd(0, 0) = 0, \) we get \( 0 \cdot 0 + 0 \cdot 0 = 0 = \gcd(0, 0). \)
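Incidentally, Python's standard library follows exactly this convention, so the edge cases can be checked directly:

```python
from math import gcd

# math.gcd follows the definition above: it returns the nonnegative
# common divisor that every common divisor divides.
print(gcd(0, 0))   # 0: every integer divides 0, and they all divide 0
print(gcd(7, 0))   # 7: gcd(n, 0) = |n|
print(gcd(0, -7))  # 7: the result is always nonnegative
```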
That's all I wanted to share about the front cover of the book. While the front cover is quite interesting, the content of the book is even more fascinating. I found chapters 12 and 13 of the book to be the most interesting. In chapter 12, the book teaches how to prove that the Riemann zeta function \( \zeta(s) \) vanishes at every negative even integer \( s. \) Through several contour integrals and clever use of Cauchy's residue theorem, it shows in the end that \( \zeta(-2n) = 0 \) for \( n = 1, 2, 3, \dots. \) In chapter 13, the book shows us how to obtain zero-free regions where \( \zeta(s) \) does not vanish. The book exposes various subtle nuances of the zeta function with great rigour and thoroughness. Results like \( \zeta(-1) = -1/12 \) that once felt mysterious look crystal clear and obvious after working through this book. I strongly recommend this book to anyone who wants to learn analytic number theory.
We have been reading the book Introduction to Analytic Number Theory by Apostol (1976) since March 2021. It has been going consistently since then and the previous few posts on this blog provide an account of how this journey has been so far. After about seven months of reading this book together, we are having our final meeting for this book today. This is going to be the 120th meeting of our book discussion group. The meeting notes from all previous reading sessions are archived at IANT Notes. We will discuss the final two pages of this book today and complete reading this book.
In the meeting today, we will look at some applications of the recursion formula related to partition functions that we learnt earlier. Here is an excerpt from the book that shows a specific example that demonstrates the richness and beauty of concepts one can discover while studying analytic number theory:
Equation (24) becomes \[ np(n) = \sum_{k=1}^n \sigma(k) p(n - k), \] a remarkable relation connecting a function of multiplicative number theory with one of additive number theory.
Now what equation (24) contains is not important for this post. Of course, you can refer to the book if you really want to know what equation (24) is. We learnt to prove that equation in the penultimate meeting for this subject yesterday. In this post, I will emphasise why this equation is indeed remarkable.
The divisor sum function \( \sigma(n) \) represents the sum of all positive divisors of \( n. \) Here are some examples: \begin{align*} \sigma(1) &= 1, \\ \sigma(2) &= 1 + 2 = 3, \\ \sigma(3) &= 1 + 3 = 4, \\ \sigma(4) &= 1 + 2 + 4 = 7, \\ \sigma(5) &= 1 + 5 = 6. \end{align*} We have spent a good amount of time with this function in the initial chapters of the book. However, for the purpose of this blog post, the definition and the examples above are good enough.
The \( p(n) \) function is the unrestricted partition function. It represents the number of ways \( n \) can be written as a sum of positive integers \( \le n. \) Further, we let \( p(0) = 1. \) Here are some examples: \begin{align*} p(1) &= 1, \\ p(2) &= 2, \\ p(3) &= 3, \\ p(4) &= 5, \\ p(5) &= 7. \end{align*} Let me illustrate the last value. The integer \( 5 \) can be represented as a sum of positive integers \( \le 5 \) in 7 different ways. They are: \( 5, \) \( 4 + 1, \) \( 3 + 2, \) \( 3 + 1 + 1, \) \( 2 + 2 + 1, \) \( 2 + 1 + 1 + 1, \) and \( 1 + 1 + 1 + 1 + 1. \) Thus \( p(5) = 7. \)
The divisor sum function comes from multiplicative number theory. The partition function comes from additive number theory. Yet these two very different things get linked together in the formula mentioned in the excerpt included above. Here is the formula once again: \[ np(n) = \sum_{k=1}^n \sigma(k) p(n - k). \] How beautiful! How nicely the divisor sum function and the unrestricted partition function appear together elegantly in a single equation! Further, this equation provides a recursion formula for the partition function. Here is an illustration of this equation with \( n = 5 \): \[ 5 \cdot p(5) = 5 \cdot 7 = 35. \] \begin{align*} \sum_{k=1}^5 \sigma(k) p(5 - k) &= \sigma(1) p(4) + \sigma(2) p(3) + \sigma(3) p(2) + \sigma(4) p(1) + \sigma(5) p(0) \\ &= (1)(5) + (3)(3) + (4)(2) + (7)(1) + (6)(1) \\ &= 5 + 9 + 8 + 7 + 6 \\ &= 35. \end{align*} We will go through this topic once more in the meeting today, so if you are interested to see this formula worked out in a step-by-step manner, do join our final meeting for this book.
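The identity is easy to check numerically for small values of \( n. \) Here is a short Python sketch (my own, not from the book) with naive implementations of \( \sigma \) and \( p \):

```python
def sigma(n):
    """Sum of all positive divisors of n."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def p(n, largest=None):
    """Number of partitions of n into parts no larger than `largest`."""
    if largest is None:
        largest = n
    if n == 0:
        return 1
    # choose the largest part, then partition the remainder into parts <= it
    return sum(p(n - part, part) for part in range(1, min(largest, n) + 1))

# Check n * p(n) = sum over k of sigma(k) * p(n - k) for the first few n.
for n in range(1, 11):
    assert n * p(n) == sum(sigma(k) * p(n - k) for k in range(1, n + 1))
print("identity verified for n = 1..10")
```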
The final meeting is coming up at 17:00 UTC today. Visit the analytic number theory page to get the meeting link. This is not going to be the final meeting for our overall book discussion group though. This is going to be the final meeting for only the analytic number theory book. We will have more meetings for another book after a short break.
The meeting today is going to be a lightweight session. The last two pages that we will discuss today contain some examples of recursion formulas and some commentary about Ramanujan's partition identities. Most of it should make sense even to those who have not been part of our meetings earlier, so everyone is welcome to join this meeting today, even if only to lurk. You can also join our group by joining our IRC channel where we will publish updates about future meetings. Our channel details are available in the main page here.
A big thank you to the Hacker News community and the Libera IRC mathematics and algorithms communities who showed interest in these meetings, joined the meetings, and made this series of meetings successful.
After 114 meetings and 75 hours of studying together, our analytic number theory book discussion group has finally reached the final chapter of the book Introduction to Analytic Number Theory by Apostol (1976). We have less than 18 pages to read in order to complete reading this book. Considering that we meet 3-4 times a week and discuss about 2-3 pages in every meeting, it appears that we would be able to complete reading this book in another 2 weeks.
Reading this book has been quite a journey! The previous three posts on this blog provide an account of how this journey has been. It has been fun, of course. The best part of hosting a book discussion group like this has been the number of extremely smart people I got an opportunity to meet and interact with. The insights and comments on the study material that others shared during the meetings were very helpful.
The meeting log shows that our meetings started really small with only 4 participants in the first meeting in March 2021 and then it gradually grew to about 10-12 regular members within a month. Then a few months later, the number of participants began dwindling a little. This happened because some members of the group had to drop out as they got busy with other personal or professional engagements. However, six months later, we still have about 4-5 regular participants meeting consistently. I think it is pretty good that we have made it this far.
The final chapter on integer partitions is very unlike all the previous 13 chapters. While the previous chapters dealt with multiplicative number theory, this final chapter deals with additive number theory. For example, the first theorem talks about an interesting property of unrestricted partitions. We study the number of ways a positive integer can be expressed as a sum of positive integers. The number of summands is unrestricted, repetition of summands is allowed, and the order of the summands is not taken into account. For example, the number 3 has 3 partitions: 3, 2 + 1, and 1 + 1 + 1. Similarly, the number 4 has 5 partitions: 4, 3 + 1, 2 + 2, 2 + 1 + 1, and 1 + 1 + 1 + 1.
I have always wanted to learn about partitions more deeply, so I am quite happy that this book ends with a chapter on partitions. The subject of partitions is rich with very interesting results obtained by various accomplished mathematicians. In the book, the first theorem about partitions is a very simple one that follows from the geometric representation of partitions. Let us see an illustration first.
How many partitions of 6 are there? There are 11 partitions of 6. They are 6, 5 + 1, 4 + 2, 4 + 1 + 1, 3 + 3, 3 + 2 + 1, 3 + 1 + 1 + 1, 2 + 2 + 2, 2 + 2 + 1 + 1, 2 + 1 + 1 + 1 + 1, and 1 + 1 + 1 + 1 + 1 + 1. Now how many of these partitions are made up of 4 parts? Each summand is called a part. The answer is 2. There are 2 partitions of 6 that are made up of 4 parts. They are 3 + 1 + 1 + 1 and 2 + 2 + 1 + 1. Let us represent both these partitions as arrangements of lattice points. Here is the representation of the partition 3 + 1 + 1 + 1:
• • •
•
•
•
Now if we read this arrangement from left-to-right, column-by-column, we get another partition of 6, i.e., 4 + 1 + 1. Note that the number of parts in 3 + 1 + 1 + 1 (i.e., 4) appears as the largest part in 4 + 1 + 1. Similarly, the number of parts in 4 + 1 + 1 (i.e., 3) appears as the largest part in 3 + 1 + 1 + 1. Let us see one more example of this relationship. Here is the geometric representation of 2 + 2 + 1 + 1:
• •
• •
•
•
Once again, reading this representation from left-to-right, column-by-column, we get 4 + 2, another partition of 6. Once again, we can see that the number of parts in 2 + 2 + 1 + 1 (i.e., 4) appears as the largest part in 4 + 2, and vice versa. These observations lead to the first theorem in the chapter on partitions:
Theorem 14.1 The number of partitions of \( n \) into \( m \) parts is equal to the number of partitions of \( n \) into parts, the largest of which is \( m. \)
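The theorem is easy to check by brute force for small \( n. \) Here is a Python sketch (my own illustration, not from the book; `partitions` is a made-up helper) that generates all partitions and counts both sides:

```python
def partitions(n, max_part=None):
    """Yield all partitions of n as non-increasing tuples of parts."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for part in range(min(max_part, n), 0, -1):
        for rest in partitions(n - part, part):
            yield (part,) + rest

# Verify Theorem 14.1 for all n up to 12 and all m <= n.
for n in range(1, 13):
    all_parts = list(partitions(n))
    for m in range(1, n + 1):
        into_m_parts = sum(1 for q in all_parts if len(q) == m)
        largest_is_m = sum(1 for q in all_parts if q[0] == m)
        assert into_m_parts == largest_is_m
print("Theorem 14.1 verified for n <= 12")
```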
That was a brief introduction to the chapter on partitions. In the next two or so weeks, we will dive deeper into the theory of partitions.
If this blog post was fun for you, consider joining our next meeting. Our next meeting is on Tue, 21 Sep 2021 at 17:00 UTC. Since we are at the beginning of a new chapter, it is a good time for new participants to join us. It is also a good time for members who have been away for a while to join us back. Since this chapter does not depend much on the previous chapters, new participants should be able to join our reading sessions for this chapter and follow along easily without too much effort.
To join our discussions, see our channel details in the main page here. To get the meeting link for the next meeting, visit the analytic number theory book page.
It is worth mentioning here that lurking is absolutely fine in our meetings. In fact, most participants of our meetings join in and stay silent throughout the meeting. Only a few members talk via audio/video or chat. This is considered absolutely normal in our meetings, so please do not hesitate to join our meetings!
The book I had chosen for our discussions was Introduction to Analytic Number Theory by Apostol (1976). I have been hosting 40-minute meetings for about 3-4 days every week since March 2021. We discuss a couple of pages of the book in every meeting. Most participants in these meetings are from Hacker News and the Libera IRC network. For a long time, I was eager to learn the proof of the prime number theorem. For those unfamiliar with the theorem, I will describe it briefly in further sections. Let me first answer the question I asked in the previous paragraph.
So how long does it take to start with no knowledge of analytic number theory and teach ourselves the analytic proof of the prime number theorem? Turns out, it takes 72 hours! It took our group 72 hours spread across 110 meetings over 6 months to be able to understand the proof. It is worth noting here that most of us in this group have full-time jobs and other personal obligations! We were all doing this for fun, for the joy of learning!
Now I must mention that the 72 hours noted above is only the time spent together in reading the book and working through the theorems and proofs. It does not include the personal time spent in solving problems, reading some sections again, taking notes, etc. All of that was done in our personal time. We did discuss the solutions to some of the very interesting problems in our meetings just to take a break from the theorem-and-proof style of reading but most of these 72 hours of meetings focussed on working through the theorems and proofs in the book.
It may be possible to achieve this milestone in fewer hours, perhaps by reading the book alone, which for some folks might be faster than studying in a group, or perhaps by skipping some chapters on topics that look very familiar. In our discussions, however, we did not skip any chapter. There were in fact a few chapters we could have skipped. All members of these meetings were very familiar with divisibility, greatest common divisor, the fundamental theorem of arithmetic, etc. discussed in Chapter 1. Most of us were also very familiar with the concepts discussed in Chapter 5 such as congruences, residue classes, the Euler-Fermat theorem, the Chinese remainder theorem, etc. Despite being familiar with these concepts, we decided not to skip any chapter for the sake of completeness of our coverage of the material. In fact, we read every single line of the book and deliberated over every single concept discussed in the book. With this detailed and tedious approach to reading the book, it took us 72 hours to read about 290 pages and learn the analytic proof of the prime number theorem in Chapter 13.
The prime number theorem is a very curious fact about the distribution of prime numbers that Gauss noticed in the year 1792 when he was about 15 years old. He noticed that the occurrence of primes becomes rarer and rarer as we expand our search for them to larger and larger integers. For example, there are 4 primes between 1 and 10, i.e., 40% of the numbers between 1 and 10 are primes! But there are only 25 primes between 1 and 100, i.e., only 25% of the numbers between 1 and 100 are primes. If we go up to 1000, we notice that there are only 168 primes between 1 and 1000, i.e., only 16.8% of the numbers between 1 and 1000 are primes. Formally, we express these facts with the prime counting function, denoted \( \pi(x). \) We say \( \pi(10) = 4, \) \( \pi(100) = 25, \) \( \pi(1000) = 168, \) and so on. Note that we allow \( x \) to be a real number, so while \( \pi(10) = 4, \) we have \( \pi(10.3) = 4 \) as well. One of the reasons we let \( x \) be a real number in the definition of \( \pi(x) \) is that it makes various problems we come across during the study of this function more convenient to work on using real analysis.
We observe that the "density" of primes continue to fall as we make \( x \) larger and larger. In formal notation, we see that the ratio \( \pi(x) / x \) is \( 0.4 \) when \( x = 10. \) This ratio falls to \( 0.25 \) when \( x = 100. \) It falls further to \( 0.168 \) when \( x = 1000, \) and so on. Can we predict by how much this "density" falls? The answer is, yes, and that leads us to the prime number theorem. The prime number theorem states that \( \pi(x) / x \) is asymptotic to \( 1 / \log x \) as \( x \) approaches infinity, i.e., \[ \frac{\pi(x)}{x} \sim \frac{1}{\log x} \text{ as } x \to \infty. \] For those unfamiliar with the notation of asymptotic equality, here is another equivalent way to state the above relationship, \[ \lim_{x \to \infty} \frac{\pi(x) / x}{1 / \log x} = 1. \] We could also write this as \[ \lim_{x \to \infty} \frac{\pi(x)}{x / \log x} = 1 \] or \[ \pi(x) \sim \frac{x}{\log x} \text{ as } x \to \infty. \] Let us see how well this formula works as an estimate for the density of primes for small values of \( x. \)
| \( x \) | \( \pi(x) \) | \( x / \log x \) |
|---|---|---|
| 10 | 4 | 4.3 |
| 100 | 25 | 21.7 |
| 1000 | 168 | 144.8 |
| 10000 | 1229 | 1085.7 |
| 100000 | 9592 | 8685.9 |
Not bad! In fact, the last two columns begin to agree more and more as \( x \) becomes larger and larger.
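The values of \( \pi(x) \) in the table above can be reproduced with a basic sieve of Eratosthenes. Here is a rough Python sketch (my own, not from the book; `pi_values` is a made-up helper name):

```python
from math import log

def pi_values(limit, checkpoints):
    """Sieve of Eratosthenes; return {x: pi(x)} for each checkpoint x."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    result, count = {}, 0
    for n in range(2, limit + 1):
        if sieve[n]:
            count += 1
        if n in checkpoints:
            result[n] = count
    return result

# Print pi(x) against the estimate x / log x at each power of 10.
for x, pi_x in sorted(pi_values(100_000, {10, 100, 1000, 10_000, 100_000}).items()):
    print(f"{x:>7} {pi_x:>6} {x / log(x):10.1f}")
```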
The analytic proof of the prime number theorem was achieved with an intricate chain of equivalences and implications between various theorems. The book consumes 13 chapters and 290 pages before completing the proof of the prime number theorem. Each page is also quite dense with information. The amount of commentary or illustrations is very little in the book. Most of the book keeps alternating between theorem statements and proofs. Occasionally, for especially long chapters with an intricate sequence of proofs, Apostol provides a plan of the proof in the introductions to such chapters. It is quite hard to summarise a large and dense volume of work like this in a blog post but I will make an attempt to paint a very high-level picture of some of the key concepts that are involved in the proof.
Everything from Chapters 1 to 3 is about building basic concepts and tools we will use later to work on the problem of the prime number theorem. These concepts and tools were very interesting on their own. They involved divisibility, various number-theoretic functions, Dirichlet products, the big oh notation, etc. Chapter 4 was the first chapter where we engaged ourselves with the prime number theorem. This chapter taught us several other formulas that were logically equivalent to the prime number theorem. One equivalence that would play a big role later was the equivalence between the prime number theorem \[ \lim_{x \to \infty} \frac{\pi(x) \log x}{x} = 1 \] and the following form: \[ \lim_{x \to \infty} \frac{\psi(x)}{x} = 1. \] If we could prove one, the validity of the other would be established automatically. The notation \( \psi(x) \) denotes the Chebyshev function which in turn is defined in terms of the Mangoldt function \( \Lambda(n) \) as \( \psi(x) = \sum_{n \le x} \Lambda(n). \) Note that the formula above can also be stated using the asymptotic equality notation as follows: \[ \psi(x) \sim x \text{ as } x \to \infty. \] There were several other equivalent forms too shown in Chapter 4. The fact that all these various forms were equivalent to each other was rigorously proved in the chapter. Thus proving any one of the equivalent forms would be sufficient to prove the prime number theorem. But in Chapter 4, we did not know how to prove any of the equivalent forms. We could only prove the equivalence of the various formulas, not the formulas themselves. We only learnt that if any of the equivalent forms is true, so is the prime number theorem. Similarly, if any of the equivalent forms is false, so is the prime number theorem. We would visit the prime number theorem again in Chapter 13 which would complete the proof of the prime number theorem by showing that the equivalent form mentioned above is indeed true.
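For a numerical feel of the equivalent form \( \psi(x) \sim x, \) here is a naive Python sketch (my own, not from the book) that computes \( \psi(x) = \sum_{n \le x} \Lambda(n) \) directly from the definition of the Mangoldt function, where \( \Lambda(n) = \log p \) if \( n \) is a power of a single prime \( p \) and \( 0 \) otherwise:

```python
from math import log

def mangoldt(n):
    """Mangoldt function: log p if n = p^k for a prime p, else 0."""
    if n < 2:
        return 0.0
    p = next(d for d in range(2, n + 1) if n % d == 0)  # smallest prime factor
    m = n
    while m % p == 0:
        m //= p
    return log(p) if m == 1 else 0.0

def psi(x):
    """Chebyshev function: sum of mangoldt(n) for n <= x."""
    return sum(mangoldt(n) for n in range(2, int(x) + 1))

for x in (10, 100, 1000):
    print(x, psi(x) / x)   # the ratios slowly drift towards 1
```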
Chapters 5 to 10 introduced more concepts involving congruences, finite abelian groups, their characters, Dirichlet characters, Dirichlet's theorem on primes in arithmetic progressions, Gauss sums, quadratic residues, primitive roots, etc. Some of these concepts would turn out to be very important in proving the prime number theorem but most of them probably are not too important if understanding the proof of the prime number theorem is the only goal. Regardless, all of these chapters were very interesting.
It was in Chapters 11 and 12 that we felt that we were getting closer and closer to the proof of the prime number theorem. Chapter 11 began a detailed and rigorous study of convergence and divergence of Dirichlet series. The Riemann zeta function is a specific type of Dirichlet series. Chapter 12 introduced analytic continuation of the Riemann zeta function. We could then show interesting results like \( \zeta(0) = -1/2 \) and \( \zeta(-1) = -1/12 \) using the analytic continuation of the zeta function. This chapter also showed us why all trivial zeroes of \( \zeta(s) \) must lie at negative even integers.
One thing I realised during the study of this book is how frequently we use concepts, operations, functions, and theorems named after Dirichlet. It was impossible to get through a meeting without having uttered "Dirichlet" at least a dozen times!
Finally, Chapter 13 showed us how to prove the prime number theorem. The plan of the proof was laid out in the first section. Our goal in this chapter is to prove that \( \psi(x) \sim x \) as \( x \to \infty. \) This is equivalent to the prime number theorem, so proving this amounts to proving the prime number theorem too.
Next we learn that the asymptotic relation \( \psi_1(x) \sim x^2 / 2 \) as \( x \to \infty \) implies the previous asymptotic relationship. Here \( \psi_1(x) \) is defined as \( \psi_1(x) = \int_1^x \psi(t) \, dt. \) This implication is proved quite easily in one and a half pages. But we still need to show that the asymptotic relation \( \psi_1(x) \sim x^2 / 2 \) as \( x \to \infty \) indeed holds good. Proving this takes a lot of work. To prove this asymptotic relation we first learn to arrive at the following equation involving a contour integral: \[ \frac{\psi_1(x)}{x^2} - \frac{1}{2} \left( 1 - \frac{1}{x} \right)^2 = \frac{1}{2\pi i} \int_{c - \infty i}^{c + \infty i} \frac{x^{s - 1}}{s(s + 1)} \left( -\frac{\zeta'(s)}{\zeta(s)} - \frac{1}{s - 1} \right) \, ds \] for \( c > 1. \) The equation above looks quite complex initially but each part of it becomes friendly as we learn to derive it and then work on each part of it while working out further proofs. Now if we could somehow show that the integral on the right hand side of the above equation approaches 0 as \( x \to \infty, \) that would end up proving the asymptotic relation involving \( \psi_1(x) \) and thus end up proving the prime number theorem by equivalence. However, proving that this integral indeed approaches 0 as \( x \to \infty \) requires a careful study of \( \zeta'(s)/\zeta(s) \) in the vicinity of the line \( \operatorname{Re}(s) = 1. \) This is the topic that most of the chapter deals with.
This plan of the proof looked quite convoluted initially but Apostol has done a great job in this chapter to first walk us through this plan and then prove each fact that we need to make the proof work in a detailed and rigorous manner. When we reached the end of the proof, one of our regular members remarked, "Now the proof does not look so complex!"
Would the elementary proof of the prime number theorem have been easier? I don't know. I have not studied the elementary proof. But Apostol does say this at the beginning of Chapter 13,
The analytic proof is shorter than the elementary proof sketched in Chapter 4 and its principal ideas are easier to comprehend.
Learning the analytic proof itself was quite a long journey that required dedication and consistency in our studies over a period of 6 months. If we trust the above excerpt from the book, then I think it is fair to assume that the elementary proof is even more formidable.
That was an account of our journey through an analytic number theory book from its first chapter up to the analytic proof of the prime number theorem. We have not completed reading the entire book though. We still have about another 30 pages to go through. In the remaining study of this book, we will learn more about zero-free regions for \( \zeta(s), \) the application of the prime number theorem to the divisor function, and the Euler totient function. The next and the final chapter too has a lot to offer such as integer partitions, Euler's pentagonal-number theorem, and the partition identities of Ramanujan. I am pretty hopeful that we will complete reading this book in another few weeks of meetings.
In this blog post, I will talk about my personal experience hosting these meetings and my personal journey of reading this book. It is worth keeping in mind, then, that what I am about to write below may not resemble the experience of other participants of these meetings.
As far as I know, everyone who joins our meetings is involved in computer programming in one form or another. A few of them have a very strong background in mathematics. I host these meetings every day and discuss a few sections of the book in detail. I show how to work through the proofs, explain some of the steps, etc. Sometimes I get stuck on some step that I find too unobvious. Sometimes the steps are obvious but my brain is too slow to understand why the steps work. But these tiny glitches have not been a problem so far, thanks to all the members who join these meetings on a daily basis and contribute their explanations of the proofs.
I believe the group members are the best part of these discussions. Thanks to the insights and explanations of the reading material shared by all these members, I am fairly confident that we are able to take a close look at every proof and convince ourselves that every step of the proofs works.
The first web meeting to discuss the chosen analytic number theory book occurred on 5 Mar 2021. See the blog post Reading Classic Computation Books to read about the early days of our group and how it was formed. Back then, I knew little to nothing about analytic number theory. Although I was familiar with some of the elementary concepts like divisibility, Euler's totient function, modular arithmetic, calculus, and related theorems, chapter 2 of the book itself proved to be a significant challenge for me. In the second chapter, it became clear to me that we would be building new levels of mathematical abstraction, using these abstractions to build yet another layer of abstractions, and so on. The chapter began with a description of the Möbius function, a very neat and interesting function that I was previously unaware of. That was fun! But soon, this chapter began adding new layers of abstractions such as the Dirichlet product, the Dirichlet inverse, generalised convolutions, etc. I could almost feel my brain stretching and growing as we went through each page of this chapter.
I often found that after I had learnt a new concept in a chapter, it would not become intuitive immediately. I would understand the concepts, understand the related theorems, understand each step of the proofs, solve exercise problems, know how to apply the theorems when needed, and yet I could not "feel" them. I wanted to not just understand the concepts but I also wanted to "feel" the concepts like the way I could feel algebra, calculus, computer programming, etc. In the initial days, I wondered if I was too old to develop good intuition for all these new and highly sophisticated concepts.
Despite always feeling that all these concepts were too technical and quite unintuitive, I kept going. I kept hosting these discussions with a frequency of about 3-5 days every week. We continued discussing the various chapters and the proofs in them. And then suddenly one day while reading chapter 4, something interesting happened. As we were employing Dirichlet products to obtain some useful results, I realised that the concept of Dirichlet products which once felt so foreign two chapters earlier, now felt completely intuitive. I could see different functions being equivalent to Dirichlet products intuitively and effortlessly. Dirichlet products felt no more alien than, say, arithmetic multiplication. I could "feel" it now. It was a great feeling. I realised that sometimes it might take a few additional chapters of reading and using those concepts over and over again before they really begin to feel intuitive.
In this section, I will pick three interesting concepts from different parts of the book to provide a glimpse of what the journey has been like. These three things occur in the book again and again and play a very important role in several chapters of the book. Of course, it goes without saying that there are many interesting concepts in the book and many of them may be more important than the ones I am about to show below.
For any positive integer \( n, \) the Möbius function \( \mu(n) \) is defined as follows: \[ \mu(1) = 1; \] If \( n > 1, \) write \( n = p_1^{a_1} \dots p_k^{a_k} \) (prime factorisation). Then \begin{align*} \mu(n) & = (-1)^k \text{ if } a_1 = a_2 = \dots = a_k = 1, \\ \mu(n) & = 0 \text{ otherwise}. \end{align*} If \( n \ge 1, \) we have \[ \sum_{d \mid n} \mu(d) = \begin{cases} 1 & \text{ if } n = 1, \\ 0 & \text{ if } n > 1. \end{cases} \]
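Here is a straightforward Python sketch (my own illustration, not from the book) that computes \( \mu(n) \) by trial division and checks the divisor-sum identity above:

```python
def mobius(n):
    """Mobius function mu(n), computed from the prime factorisation of n."""
    if n == 1:
        return 1
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:      # d appears squared in the factorisation
                return 0
            result = -result    # one more distinct prime factor
        d += 1
    return -result if n > 1 else result  # leftover prime factor, if any

# Check: the sum of mu(d) over divisors d of n is 1 for n = 1, else 0.
for n in range(1, 50):
    s = sum(mobius(d) for d in range(1, n + 1) if n % d == 0)
    assert s == (1 if n == 1 else 0)
print("divisor-sum identity verified for n < 50")
```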
I was unfamiliar with this function prior to reading the book. It felt like a nice little cute function initially but as we went through more chapters, it soon became clear that this function plays a major role in analytic number theory.
As a simple example, we will soon see in this post that Euler's totient function can be expressed as a Dirichlet product of the Möbius function and the arithmetical function \( N(n) = n. \)
As a more sophisticated example, the Dirichlet series with coefficients as the Möbius function is the multiplicative inverse of the Riemann zeta function, i.e., if \( s = \sigma + it \) is a complex number with its real part \( \sigma > 1, \) we have \[ \sum_{n=1}^{\infty} \frac{\mu(n)}{n^s} = \frac{1}{\zeta(s)}. \] This immediately shows that \( \zeta(s) \ne 0 \) for \( \sigma > 1. \)
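This relationship can be sanity-checked numerically at \( s = 2, \) where \( 1/\zeta(2) = 6/\pi^2. \) Here is a rough Python sketch (my own, not from the book) that sieves \( \mu(n) \) and compares the partial sum against \( 6/\pi^2 \):

```python
from math import pi

N = 10_000
mu = [0, 1] + [1] * (N - 1)         # mu[n] will hold the Mobius function
is_prime = [True] * (N + 1)
for p in range(2, N + 1):
    if is_prime[p]:
        for m in range(p, N + 1, p):
            if m > p:
                is_prime[m] = False
            mu[m] = -mu[m]          # one more distinct prime factor p
        for m in range(p * p, N + 1, p * p):
            mu[m] = 0               # p squared divides m, so mu(m) = 0

partial = sum(mu[n] / n ** 2 for n in range(1, N + 1))
print(partial, 6 / pi ** 2)         # the two values agree to a few decimals
```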
If \( f \) and \( g \) are two arithmetical functions, their Dirichlet product \( f * g \) is defined as: \[ (f * g)(n) = \sum_{d \mid n} f(d) g\left( \frac{n}{d} \right). \] Dirichlet products appear to pop up magically at various places in number theory. Here is a simple example: \[ \varphi(n) = \sum_{d \mid n} \mu(d) \frac{n}{d}. \] Therefore in the notation of Dirichlet products, the above equation can also be written as \[ \varphi = \mu * N \] where \( N \) represents the arithmetical function \( N(n) = n \) for all \( n. \)
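The identity \( \varphi = \mu * N \) is easy to check numerically. Here is a short sketch of my own (not from the book) that implements the Dirichlet product naively and compares it against the totient computed by direct counting:

```python
from math import gcd

def mobius(n):
    """Möbius function by trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:   # a squared prime factor
                return 0
            result = -result
        else:
            p += 1
    return -result if n > 1 else result

def phi(n):
    """Euler's totient, counting the integers in [1, n] coprime to n."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def dirichlet(f, g, n):
    """The Dirichlet product (f * g)(n) as a divisor sum."""
    return sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

N = lambda n: n
print(all(phi(n) == dirichlet(mobius, N, n) for n in range(1, 100)))  # True
```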
For complex numbers \( s = \sigma + it, \) the Hurwitz zeta function \( \zeta(s, a) \) is initially defined for \( \sigma > 1 \) as \[ \zeta(s, a) = \sum_{n=0}^{\infty} \frac{1}{(n + a)^s} \] where \( a \) is a fixed real number, \( 0 < a \le 1. \) Then by analytic continuation, it is defined for \( \sigma \le 1 \) as \[ \zeta(s, a) = \Gamma(1 - s)I(s, a) \] where \( \Gamma \) represents the gamma function \[ \Gamma(s) = \int_0^{\infty} x^{s - 1} e^{-x} \, dx \] defined for \( \sigma > 0 \) and extended, by analytic continuation, to the rest of the complex plane except for the points \( s = 0, -1, -2, \dots \) (the nonpositive integers) and \( I(s, a) \) is defined by the contour integral \[ I(s, a) = \frac{1}{2\pi i} \int_C \frac{z^{s-1} e^{az}}{1 - e^z} \, dz \] where \( 0 < a \le 1 \) and the contour \( C \) is a loop around the negative real axis composed of three parts \( C_1, \) \( C_2, \) and \( C_3 \) such that for \( 0 < c < 2\pi, \) we have \( z = re^{-\pi i} \) on \( C_1 \) and \( z = re^{\pi i} \) on \( C_3 \) as \( r \) varies from \( c \) to \( +\infty, \) and \( z = ce^{i \theta} \) on \( C_2, \) \( -\pi \le \theta \le \pi. \)
Now admittedly, the definition and the analytic continuation of the Hurwitz zeta function may seem heavy and obscure to the uninitiated, and it is indeed quite heavy: it takes six pages in chapter 12 to build the prerequisite concepts before we arrive at this definition. The definition relies on other concepts such as the gamma function and a specific contour integral, so it is only natural that one has to gain sufficient familiarity with the gamma function and contour integrals before the Hurwitz zeta function begins to feel intuitive.
But once we have established the analytic continuation of the Hurwitz zeta function, many insightful facts about the Riemann zeta function follow readily. It is easy to see that the Riemann zeta function can be defined in terms of the Hurwitz zeta function as \[ \zeta(s) = \zeta(s, 1) = \sum_{n=1}^{\infty} \frac{1}{n^s}. \] Yes, the \( \zeta \) symbol is overloaded: \( \zeta(s, a) \) is the Hurwitz zeta function whereas \( \zeta(s) \) is the Riemann zeta function. This relationship, along with the analytic continuation of the Hurwitz zeta function, opens new doors into the wonderful world of complex numbers and lets us obtain beautiful and profound facts about the Riemann zeta function, such as the fact that it has zeros at the negative even integers, i.e., \( \zeta(n) = 0 \) for \( n = -2, -4, -6, \dots, \) and the fact that \( \zeta(0) = -\frac{1}{2} \) and \( \zeta(-1) = -\frac{1}{12}, \) and so on.
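In the book, these values come out of the contour-integral continuation above. As an independent sanity check (a sketch of my own using Hasse's globally convergent series for \( \zeta(s), \) which is a different route than the book's), the same values can be computed with exact rational arithmetic, because at \( s = -m \) the inner alternating sums vanish for \( n > m \) and the series terminates:

```python
from fractions import Fraction
from math import comb

def zeta_neg_int(m):
    """Exact value of zeta(-m) for an integer m >= 0 via Hasse's
    globally convergent series.  The inner alternating sum is the
    n-th finite difference of the degree-m polynomial (k+1)^m, so it
    vanishes for n > m and the series reduces to a finite sum."""
    total = Fraction(0)
    for n in range(m + 1):
        inner = sum((-1) ** k * comb(n, k) * (k + 1) ** m
                    for k in range(n + 1))
        total += Fraction(inner, 2 ** (n + 1))
    return total / (1 - 2 ** (1 + m))

print(zeta_neg_int(0))  # -1/2
print(zeta_neg_int(1))  # -1/12
print(zeta_neg_int(2))  # 0  (a trivial zero)
```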
I believe beautiful results like these, obtained by digging deep into complex analysis, are what make the study of analytic number theory so rewarding.
The next meeting is coming up today in a few hours. Are we planning anything special for the 100th meeting?
I think the 100th meeting is a significant milestone in our journey of understanding the beautiful and interesting gems hidden away in the subject of analytic number theory. This milestone has been possible only due to the sustained curiosity and eagerness among the members of the group to learn a significant area of mathematics and learn it well. We have reached this milestone successfully due to the passion and love for mathematics that drive the regular members to join these meetings and go through a few pages of the book every day. In these meetings, we have read 12 chapters consisting of over 250 pages so far. Many of us knew nothing about analytic number theory merely five months ago and now we can appreciate the Riemann zeta function at a deeper level. We now understand what the Riemann hypothesis really means. This has been a great journey so far.
Despite being a significant milestone and cause for celebration, we are going to keep our 100th meeting fairly simple. We will continue where we left off yesterday. Today we have some more relationships between the gamma function and the Riemann zeta function to go through, so that is what we will do. We will also show that \( \zeta(0) = -\frac{1}{2} \) and \( \zeta(-1) = -\frac{1}{12} \) using the analytic continuation of the Hurwitz zeta function today.
If this blog post was fun for you and you would like to join our meetups, please go through this page to get the meeting link and join us.
Important numbers in the proof: \[ 0, \quad \underbrace{[y]}_{=\,m}, \quad y, \quad \underbrace{[y] + 1}_{=\,m + 1}, \quad \underbrace{[x]}_{=\,k}, \quad x. \] Splitting the definite integral: \[ \int_y^x f(t)\,dt = \int_{y}^{[y] + 1} f(t)\,dt + \underbrace{\int_{[y] + 1}^{[y] + 2} f(t)\,dt + \dots + \int_{[x] - 1}^{[x]} f(t)\,dt}_{=\,\int_{[y] + 1}^{[x]} f(t)\, dt} + \int_{[x]}^{x} f(t)\,dt. \] Using the more convenient variables \( m \) and \( k, \) we get: \[ \int_y^x f(t)\,dt = \int_m^{m + 1} f(t)\,dt + \underbrace{\int_{m + 1}^{m + 2} f(t)\,dt + \dots + \int_{k - 1}^{k} f(t)\,dt}_{=\,\int_{m + 1}^{k} f(t)\, dt} + \int_{k}^{x} f(t)\,dt. \]
\begin{align*} \int_{m + 1}^{k} [t] f'(t) dt & = \int_{m + 1}^{m + 2} [t] f'(t) dt + \int_{m + 2}^{m + 3} [t] f'(t) dt + \dots + \int_{k - 1}^{k} [t] f'(t) dt \\ & = \begin{aligned}[t] & (m + 2) f(m + 2) - (m + 1) f(m + 1) - f(m + 2) \\ + & (m + 3) f(m + 3) - (m + 2) f(m + 2) - f(m + 3) \\ & \dots \\ + & (k) f(k) - (k - 1) f(k - 1) - f(k) \end{aligned} \\ & = kf(k) - (m + 1)f(m + 1) - \sum_{n=m + 2}^{k} f(n) \\ & = kf(k) - mf(m + 1) - f(m + 1) - \sum_{n=m + 2}^{k} f(n) \\ & = kf(k) - mf(m + 1) - \sum_{n=m + 1}^{k} f(n) \\ & = kf(k) - mf(m + 1) - \sum_{y < n \le x} f(n). \end{align*}
\begin{align*} \sum_{y < n \le x} f(n) & = - \int_{m + 1}^k [t] f'(t) \, dt + k f(k) - m f(m + 1) \\ & = \begin{aligned}[t] & \left( - \int_y^{m + 1} [t] f'(t) \, dt - \int_{m + 1}^k [t] f'(t) \, dt - \int_k^x [t] f'(t) \, dt \right) \\ & + k f(k) - m f(m + 1) + \int_y^{m + 1} [t] f'(t) \, dt + \int_k^x [t] f'(t) \, dt \end{aligned} \\ & = - \int_y^x [t] f'(t) \, dt + k f(k) - m f(m + 1) + \int_y^{m + 1} m f'(t) \, dt + \int_k^x k f'(t) \, dt \\ & = - \int_y^x [t] f'(t) \, dt + k f(k) - m f(m + 1) + \biggl( m f(m + 1) - m f(y) \biggr) + \biggl( k f(x) - k f(k) \biggr) \\ & = - \int_y^x [t] f'(t) \, dt + k f(x) - m f(y). \end{align*}
Integration by parts: \[ \int uv \, dt = u \int v \, dt - \int u' \left( \int v \, dt \right) \, dt. \] \[ \int_y^x t f'(t) \, dt = \left. \left( t f(t) - \int f(t) \, dt \right) \right|_y^x = x f(x) - y f(y) - \int_y^x f(t) \, dt. \] Final step of the proof: \begin{align*} \sum_{y < n \le x} f(n) & = -\int_y^x [t] f'(t) \, dt + k f(x) - m f(y) \\ & = \begin{aligned}[t] & -\int_y^x [t] f'(t) \, dt + [x] f(x) - [y] f(y) \\ & + \underbrace{ \left( \int_y^x t f'(t) \, dt - x f(x) + y f(y) + \int_y^x f(t) \, dt \right)}_{0 \text{ by above definite integral}} \end{aligned} \\ & = \int_y^x f(t) \, dt + \int_y^x (t - [t]) f'(t) \, dt + f(x)([x] - x) - f(y)([y] - y). \end{align*}
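The final identity above is Euler's summation formula. As a quick numerical sanity check (a sketch of my own, not part of the book), we can take \( f(t) = 1/t \) and evaluate the \( (t - [t]) \) integral exactly on each unit interval, where \( [t] \) is constant:

```python
import math

def frac_integral(y, x):
    """Exact value of the integral of (t - [t]) f'(t) over [y, x]
    for f(t) = 1/t, i.e. f'(t) = -1/t^2, split at the integers
    where [t] jumps.  On each piece [t] = n is constant and the
    antiderivative of (t - n) * (-1/t^2) is -(log t + n/t)."""
    total, a = 0.0, y
    while a < x:
        n = math.floor(a)
        b = min(float(n + 1), x)
        total += -(math.log(b / a) + n * (1.0 / b - 1.0 / a))
        a = b
    return total

y, x = 1.5, 10.3
lhs = sum(1.0 / n for n in range(math.floor(y) + 1, math.floor(x) + 1))
rhs = (math.log(x / y)                     # integral of f over [y, x]
       + frac_integral(y, x)               # integral of (t - [t]) f'
       + (1.0 / x) * (math.floor(x) - x)   # f(x)([x] - x)
       - (1.0 / y) * (math.floor(y) - y))  # f(y)([y] - y)
print(abs(lhs - rhs))  # tiny (float rounding only)
```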
Splitting definite integral: \begin{align*} & \int_1^{\infty} f(t) \, dt = \int_1^{x} f(t) \, dt + \int_x^{\infty} f(t) \, dt \\ & \iff \int_1^{\infty} f(t) \, dt - \int_x^{\infty} f(t) \, dt = \int_1^x f(t) \, dt. \end{align*} Solving improper integral: \[ \int_x^{\infty} \frac{1}{t^2} \, dt = \lim_{b \to \infty} \int_x^b \frac{1}{t^2} dt = \lim_{b \to \infty} \frac{-1}{t} \Biggr|_x^b = \left( \lim_{b \to \infty} \frac{-1}{b} \right) + \frac{1}{x} = 0 + \frac{1}{x} = \frac{1}{x}. \]
Definition of Euler's constant: \[ C = \lim_{n \to \infty} \left( 1 + \frac{1}{2} + \frac{1}{3} + \dots + \frac{1}{n} - \log n \right) = \lim_{x \to \infty} \left( \sum_{n \le x} \frac{1}{n} - \log x \right). \] We begin with \[ \sum_{n \le x} \frac{1}{n} = \log x + \underbrace{1 - \int_1^{\infty} \frac{t - [t]}{t^2} \, dt}_{\text{We will show below that this is \( C \)}} + O\left( \frac{1}{x} \right). \] Rearranging the terms, we get \[ \sum_{n \le x} \frac{1}{n} - \log x = 1 - \int_1^{\infty} \frac{t - [t]}{t^2} \, dt + O\left( \frac{1}{x} \right). \] Using the definition of \( C, \) we get \begin{align*} C & = \lim_{x \to \infty} \left( \sum_{n \le x} \frac{1}{n} - \log x \right) \\ & = \lim_{x \to \infty} \left( 1 - \int_1^{\infty} \frac{t - [t]}{t^2} \, dt + O\left( \frac{1}{x} \right) \right) \\ & = 1 - \int_1^{\infty} \frac{t - [t]}{t^2} \, dt. \end{align*}
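Euler's constant can also be estimated directly from its definition; a quick sketch of my own (the partial sums converge like \( 1/(2n), \) so \( n = 10^5 \) already gives about five good digits):

```python
from math import log

# Estimate C = lim (1 + 1/2 + ... + 1/n - log n) by truncating at n.
n = 100_000
harmonic = sum(1.0 / k for k in range(1, n + 1))
approx_C = harmonic - log(n)
print(approx_C)  # approximately 0.57722
```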
\[ \int_1^x \frac{dt}{t^s} = \frac{t^{-s + 1}}{-s + 1} \Biggr|_1^x = \frac{t^{1 - s}}{1 - s} \Biggr|_1^x = \frac{x^{1 - s}}{1 - s} - \frac{1}{1 - s}. \] \[ \int_1^x \frac{t - [t]}{t^{s + 1}} \, dt = \int_1^{\infty} \frac{t - [t]}{t^{s + 1}} \, dt - \int_x^{\infty} \frac{t - [t]}{t^{s + 1}} \, dt = \int_1^{\infty} \frac{t - [t]}{t^{s + 1}} \, dt + \underbrace{\frac{1}{s} O\left( x^{-s}\right)}_{\text{explained below}}. \] \[ 0 \le \int_x^{\infty} \frac{t - [t]}{t^{s + 1}} \, dt \le \int_x^{\infty} \frac{1}{t^{s + 1}} \, dt = \frac{-1}{st^s} \Biggr|_x^\infty = \frac{1}{sx^s} = \frac{1}{s} x^{-s}. \] \begin{align*} \sum_{n \le x} \frac{1}{n^s} & = \int_1^x \frac{dt}{t^s} - s \int_1^x \frac{t - [t]}{t^{s + 1}} \, dt + 1 - \frac{x - [x]}{x^s} \\ & = \frac{x^{1 - s}}{1 - s} - \frac{1}{1 - s} - s \int_1^{\infty} \frac{t - [t]}{t^{s + 1}} \, dt + 1 + O(x^{-s}). \end{align*}
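For \( \sigma > 1, \) the constant terms in the last formula add up to \( \zeta(s) \) (let \( x \to \infty \)), which is easy to check numerically. A small sketch of my own with \( s = 2, \) where \( \zeta(2) = \pi^2/6 \):

```python
from math import pi

# For s = 2, the formula says sum_{n <= x} 1/n^2 equals
# x^(1-s)/(1-s) + zeta(2) + O(1/x^2), i.e. -1/x + pi^2/6 + (small).
s, x = 2, 10_000
partial = sum(1.0 / n**s for n in range(1, x + 1))
constant = partial - x**(1 - s) / (1 - s)
print(constant, pi**2 / 6)  # both approximately 1.6449
```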
Read on website | #mathematics | #number-theory | #book | #meetup
\[ f(mn) = f(m) f(n) \text{ for all } m, n. \]
\[ I(n) = \begin{cases} 1 & \text{ if } n = 1, \\ 0 & \text{ if } n > 1. \end{cases} \]
\begin{align*} f(n)I(n) & = \begin{cases} 1 \cdot 1 & \text{ if } n = 1, \\ f(n) \cdot 0 & \text{ if } n > 1. \end{cases} \\ & = \begin{cases} 1 & \text{ if } n = 1, \\ 0 & \text{ if } n > 1. \end{cases} \\ & = I(n). \end{align*}
\[ \mu(1) = 1, \qquad \mu(p) = -1, \qquad \mu(p^2) = \mu(p^3) = \dots = 0. \]
\begin{align*} \sum_{d \mid p^a} \mu(d) f(d) f\left(\frac{p^a}{d}\right) & = \sum_{d = 1, p, p^2, \dots, p^a} \mu(d) f(d) f\left(\frac{p^a}{d}\right) \\ & = \begin{aligned}[t] & \mu(1) f(1) f\left( \frac{p^a}{1} \right) + \mu(p) f(p) f\left( \frac{p^a}{p} \right) \\ & + \underbrace{\mu(p^2) f(p^2) f\left( \frac{p^a}{p^2} \right) + \dots + \mu(p^a) f(p^a) f\left( \frac{p^a}{p^a} \right)}_{=\,0} \end{aligned} \\ & = \mu(1) f(1) f(p^a) + \mu(p) f(p) f(p^{a - 1}) \\ & = f(p^a) - f(p) f(p^{a - 1}). \end{align*}
\begin{align*} f(p^a) & = f(p)f(p^{a - 1}) \\ & = f(p)f(p)f(p^{a - 2}) \\ & = \dots \\ & = \underbrace{f(p)f(p)f(p) \dots f(p)}_{a \text{ times}} \\ & = \left( f(p) \right)^a. \end{align*}
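The computation above is part of showing that for a completely multiplicative \( f, \) the function \( \mu(n) f(n) \) is the Dirichlet inverse of \( f. \) A small sketch of my own checking this for the completely multiplicative function \( f(n) = n \):

```python
def mobius(n):
    """Möbius function by trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        else:
            p += 1
    return -result if n > 1 else result

def I(n):
    """The identity function for Dirichlet multiplication."""
    return 1 if n == 1 else 0

f = lambda n: n   # completely multiplicative: f(mn) = f(m)f(n) for all m, n

def inv_check(n):
    """Sum over d | n of mu(d) f(d) f(n/d); should equal I(n)."""
    return sum(mobius(d) * f(d) * f(n // d)
               for d in range(1, n + 1) if n % d == 0)

print(all(inv_check(n) == I(n) for n in range(1, 200)))  # True
```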
\[ f(mn) = f(m)f(n) \text{ whenever } (m, n) = 1. \]
\[ f(p_1^{\alpha_1} p_2^{\alpha_2} \dots p_k^{\alpha_k}) = f(p_1^{\alpha_1}) f(p_2^{\alpha_2}) \dots f(p_k^{\alpha_k}). \]
\[ \varphi(n) = \sum_{d \mid n} \mu(d) \frac{n}{d} = \sum_{d \mid n} \mu(d) N\left(\frac{n}{d}\right) = (\mu * N)(n). \]
Let \( f \) be multiplicative. We want to show that \[ \sum_{d \mid n} \mu(d) f(d) = \prod_{p \mid n} (1 - f(p)). \] Note the following: \[ g(n) = \sum_{d \mid n} \mu(d) f(d) = \sum_{d \mid n} (\mu f) (d) u\left( \frac{n}{d} \right) = \bigl( (\mu f) * u \bigr)(n). \] The functions \( \mu \) and \( f \) are multiplicative. Thus \( \mu f \) is multiplicative. Thus \( (\mu f) * u \) is multiplicative. Therefore \[ g(n) = g(p_1^{a_1} p_2^{a_2} \dots p_k^{a_k}) = g(p_1^{a_1}) g(p_2^{a_2}) \dots g(p_k^{a_k}). \] But \begin{align*} g(p_i^{a_i}) & = \sum_{d \mid p_i^{a_i}} \mu(d) f(d) \\ & = \mu(1) f(1) + \mu(p_i) f(p_i) + \underbrace{\mu(p_i^2) f(p_i^2) + \dots + \mu(p_i^{a_i}) f(p_i^{a_i})}_{=\,0} \\ & = 1 - f(p_i). \end{align*} From the two equations above, we get \begin{align*} g(n) & = g(p_1^{a_1}) g(p_2^{a_2}) \dots g(p_k^{a_k}) \\ & = (1 - f(p_1)) (1 - f(p_2)) \dots (1 - f(p_k)) \\ & = \prod_{p \mid n} (1 - f(p)). \end{align*}
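A quick numerical sketch of my own confirming this product formula, taking the multiplicative function \( f(n) = n \) as an example:

```python
from math import prod

def mobius(n):
    """Möbius function by trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        else:
            p += 1
    return -result if n > 1 else result

def prime_divisors(n):
    """Distinct prime divisors of n, by trial division."""
    ps, p = [], 2
    while p * p <= n:
        if n % p == 0:
            ps.append(p)
            while n % p == 0:
                n //= p
        else:
            p += 1
    if n > 1:
        ps.append(n)
    return ps

f = lambda n: n   # a multiplicative function
for n in range(1, 300):
    lhs = sum(mobius(d) * f(d) for d in range(1, n + 1) if n % d == 0)
    rhs = prod(1 - f(p) for p in prime_divisors(n))
    assert lhs == rhs
print("verified for 1 <= n < 300")
```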
\begin{align*} A(x)B(x) & = \left( \sum_{n=0}^{\infty} a(n) x^n \right) \left( \sum_{n=0}^{\infty} b(n) x^n \right) \\ & = \left( a(0) + a(1)x + a(2)x^2 + \dots \right) \left( b(0) + b(1)x + b(2)x^2 + \dots \right) \\ & = a(0)b(0) + \Bigl( a(0)b(1) + a(1)b(0) \Bigr) x + \Bigl( a(0)b(2) + a(1)b(1) + a(2)b(0) \Bigr) x^2 + \dots \\ & = \sum_{k=0}^0 a(k)b(0 - k) + \sum_{k=0}^1 a(k)b(1 - k)x + \sum_{k=0}^2 a(k)b(2 - k)x^2 + \dots \\ & = \sum_{n=0}^{\infty} \left( \sum_{k=0}^n a(k)b(n - k) \right) x^n. \end{align*}
\[ A(x)B(x) = \sum_{n=0}^{\infty} \underbrace{\left\{ \sum_{k=0}^{n} a(k) b(n - k) \right\}}_{c(n)} x^n. \] \[ B(x)A(x) = \sum_{n=0}^{\infty} \underbrace{\left\{ \sum_{k=0}^{n} a(n - k) b(k) \right\}}_{c'(n)} x^n. \] \[ c(3) = a(0)b(3) + a(1)b(2) + a(2)b(1) + a(3)b(0). \] \[ c'(3) = a(3)b(0) + a(2)b(1) + a(1)b(2) + a(0)b(3). \]
\[ A(x)\Bigl(B(x) + C(x)\Bigr) = A(x)B(x) + A(x)C(x). \] \[ \Bigl(B(x) + C(x)\Bigr)A(x) = B(x)A(x) + C(x)A(x). \] \begin{align*} A(x)\Bigl(B(x) + C(x)\Bigr) & = \left( \sum_{n=0}^{\infty} a(n) x^n \right) \left( \sum_{n=0}^{\infty} \Bigl( b(n) + c(n) \Bigr) x^n \right) \\ & = \sum_{n=0}^{\infty} \Bigl\{ \sum_{k=0}^{n} a(k) \Bigl( b(n - k) + c(n - k) \Bigr) \Bigr\} x^n. \end{align*} \[ A(x)B(x) + A(x)C(x) = \sum_{n=0}^{\infty} \sum_{k=0}^n a(k) b(n - k) x^n + \sum_{n=0}^{\infty} \sum_{k=0}^n a(k) c(n - k) x^n. \]
\begin{align*} A(x)B(x) & = \sum_{n=0}^{\infty} \Bigl( \sum_{k=0}^{n} a(k) b(n - k) \Bigr) x^n \\ & = \Bigl( a(0) b(0) \Bigr) x^0 + \Bigl( a(0) b(1) + a(1) b(0) \Bigr) x^1 + \Bigl( a(0) b(2) + a(1) b(1) + a(2) b(0) \Bigr) x^2 + \dots \\ & = 1. \end{align*}
\begin{align*} A(x) & = 1 + ax + (ax)^2 + (ax)^3 + \dots, \\ B(x) & = 1 - ax. \end{align*} \begin{align*} A(x) B(x) & = \Bigl( 1 + ax + (ax)^2 + (ax)^3 + \dots \Bigr) (1 - ax) \\ & = \Bigl( 1 + ax + (ax)^2 + (ax)^3 + \dots \Bigr) - \Bigl( ax + (ax)^2 + (ax)^3 + \dots \Bigr) = 1. \end{align*}
\[ f_p(x) = \sum_{n=0}^{\infty} f(p^n) x^n = f(1) + f(p) x + f(p^2) x^2 + f(p^3) x^3 + \dots \]
\begin{align*} f(n) & = f(p_1^{a_1} p_2^{a_2} \dots p_k^{a_k}) = f(p_1^{a_1}) f(p_2^{a_2}) \dots f(p_k^{a_k}), \\ \\ g(n) & = g(p_1^{a_1} p_2^{a_2} \dots p_k^{a_k}) = g(p_1^{a_1}) g(p_2^{a_2}) \dots g(p_k^{a_k}). \\ \end{align*}
\begin{align*} \mu_p(x) = \sum_{n=0}^{\infty} \mu(p^n) x^n & = \mu(1) + \mu(p) x + \mu(p^2) x^2 + \mu(p^3) x^3 + \dots \\ & = 1 - x + 0 + 0 + \dots \\ & = 1 - x. \end{align*}
\[ A(x) = \sum_{n=0}^{\infty} a(n) x^n, \quad B(x) = \sum_{n=0}^{\infty} b(n) x^n, \quad A(x) B(x) = \sum_{n=0}^{\infty} \underbrace{\sum_{k=0}^n a(k) b(n - k)}_{c(n)} x^n. \]
\[ (f * g)_p(x) = f_p(x) g_p(x). \] \[ f_p(x) = \sum_{n=0}^{\infty} f(p^n) x^n, \quad g_p(x) = \sum_{n=0}^{\infty} g(p^n) x^n, \quad f_p(x) g_p(x) = \sum_{n=0}^{\infty} \sum_{k=0}^n f(p^k) g(p^{n-k}) x^n. \] \[ h(n) = (f * g)(n) = \sum_{d \mid n} f(d) g\left( \frac{n}{d} \right). \] \[ h_p(x) = \sum_{n=0}^{\infty} h(p^n) x^n = \sum_{n=0}^{\infty} \sum_{d \mid p^n} f(d) g\left( \frac{p^n}{d} \right) x^n = \sum_{n=0}^{\infty} \sum_{k=0}^{n} f(p^k) g(p^{n-k}) x^n. \]
Some steps of Example 1: \[ I = \mu^2 * \lambda \implies I_p(x) = \mu_p^2(x) \lambda_p(x) \] \[ I_p(x) = \mu_p^2(x) \lambda_p(x) \iff 1 = \mu_p^2(x) \cdot \frac{1}{1 + x} \iff \mu_p^2(x) = 1 + x. \]
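The identity \( I = \mu^2 * \lambda \) from Example 1 is easy to verify numerically. A sketch of my own, where \( \lambda \) is Liouville's function \( \lambda(n) = (-1)^{\Omega(n)} \) with \( \Omega(n) \) counting prime factors with multiplicity:

```python
def mobius(n):
    """Möbius function by trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        else:
            p += 1
    return -result if n > 1 else result

def liouville(n):
    """Liouville's function (-1)^Omega(n)."""
    count, p = 0, 2
    while p * p <= n:
        while n % p == 0:
            n //= p
            count += 1
        p += 1
    if n > 1:
        count += 1
    return (-1) ** count

def conv(n):
    """Sum over d | n of mu(d)^2 * lambda(n/d); should equal I(n)."""
    return sum(mobius(d) ** 2 * liouville(n // d)
               for d in range(1, n + 1) if n % d == 0)

print([conv(n) for n in range(1, 9)])  # [1, 0, 0, 0, 0, 0, 0, 0]
```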
Some steps of Example 2: \begin{align*} \frac{1}{1 - p^{\alpha}} \cdot \frac{1}{1 - x} & = \frac{1}{1 - x - p^{\alpha}x + p^{\alpha}x^2} \\ & = \frac{1}{1 - (1 + p^{\alpha})x + p^{\alpha}x^2} \\ & = \frac{1}{1 - \sigma_{\alpha}(p)x + p^{\alpha}x^2}. \end{align*} Note that \( \sigma_{\alpha}(n) = \sum_{d\,\mid\,n} d^{\alpha}, \) so \[ \sigma_{\alpha}(p) = \sum_{d\,\mid\,p} d^{\alpha} = 1^{\alpha} + p^{\alpha} = 1 + p^{\alpha}. \]
Some steps of Example 3: showing that \( f(n) = 2^{\nu(n)} \) is multiplicative: \[ f(n) = 2^{\nu(n)}. \] \[ f(p_1^{\alpha_1} p_2^{\alpha_2} \dots p_k^{\alpha_k}) = 2^{\nu(p_1^{\alpha_1} p_2^{\alpha_2} \dots p_k^{\alpha_k})} = 2^k. \] \[ f(p_1^{\alpha_1}) f(p_2^{\alpha_2}) \dots f(p_k^{\alpha_k}) = 2^{\nu(p_1^{\alpha_1})} 2^{\nu(p_2^{\alpha_2})} \dots 2^{\nu(p_k^{\alpha_k})} = \underbrace{2 \cdot 2 \cdot \dots \cdot 2}_{k \text{ times}} = 2^k. \]
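This can also be confirmed numerically; a small sketch of my own, where \( \nu(n) \) denotes the number of distinct prime divisors of \( n \):

```python
from math import gcd

def nu(n):
    """Number of distinct prime divisors of n, by trial division."""
    count, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            count += 1
            while n % p == 0:
                n //= p
        else:
            p += 1
    if n > 1:
        count += 1
    return count

f = lambda n: 2 ** nu(n)
pairs = [(m, n) for m in range(1, 40) for n in range(1, 40) if gcd(m, n) == 1]
print(all(f(m * n) == f(m) * f(n) for m, n in pairs))  # True
```

Note that \( f \) is multiplicative but not completely multiplicative: for instance, \( f(4) = 2 \) while \( f(2) f(2) = 4. \)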
\[ (f + g)' = f' + g'. \] \[ (fg)' = f'g + fg'. \] \[ \left( f^{-1} \right)' = \frac{-f'}{f^2} = -f' \cdot (f \cdot f)^{-1}. \]
\[ f'(n) = f(n) \log n. \] \[ (f + g)' = f' + g'. \] \[ (f * g)' = f' * g + f * g'. \] \[ \left( f^{-1} \right)' = -f' * (f * f)^{-1}. \]
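These rules for the derivative of an arithmetical function mirror the ordinary ones above; the product rule, for instance, comes from \( \log n = \log d + \log(n/d) \) for each divisor \( d. \) A numerical sketch of my own checking the product rule for two sample functions:

```python
from math import log

def deriv(f):
    """The 'derivative' f'(n) = f(n) log n of an arithmetical function."""
    return lambda n: f(n) * log(n)

def dirichlet(f, g, n):
    """The Dirichlet product (f * g)(n) as a divisor sum."""
    return sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

f = lambda n: n       # arbitrary test functions
g = lambda n: n * n

for n in range(1, 60):
    lhs = deriv(lambda m: dirichlet(f, g, m))(n)   # (f * g)'(n)
    rhs = dirichlet(deriv(f), g, n) + dirichlet(f, deriv(g), n)
    assert abs(lhs - rhs) < 1e-6
print("product rule verified numerically")
```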
This page contains an archive of notes from the book Introduction to Analytic Number Theory by Tom M. Apostol (1976).
Note that this set of notes is not meant to be a systematic exposition of analytic number theory. Instead this is just a collection of examples that illustrate some of the theorems in the reference textbook and intermediate steps that are not explicitly expressed in the book. These boards were used to aid the discussions during book discussion meetings. As a result, the content of these boards is informal in nature and is not intended to be a substitute for the book or the actual discussion meetings.
If you find any mistakes in the content of the board files, please create a new issue or send a pull request.
More notes coming soon! We have all the meeting notes safely archived. We just need to format them and publish them here.
The following content on this page is an archive of the content as it appeared on the last day of meetings for this book.
Meeting time: 17:00 UTC from Tuesday to Friday, usually.^{†}
Meeting duration: 40 minutes.
Meeting link: bit.ly/spzoom2
Meeting log: 120 meetings
Reference Book: Introduction to Analytic Number Theory by Tom M. Apostol (1976)
Chapter notes: Notes
Started: 05 Mar 2021
Ended: 01 Oct 2021
† There are some exceptions to this schedule occasionally. Join our channel to receive schedule updates.
The primary reference book for these meetings is Introduction to Analytic Number Theory written by Tom M. Apostol. Admittedly, the book is quite expensive, but you may find a relatively cheap paperback (softcover) copy on some websites.
These meetings are hosted by Susam and attended by some members of the #math and #algorithms channels of the Libera IRC network as well as by some members from Hacker News.
You are welcome to join these meetings anytime. If you are concerned that the meetings may not make sense if you join when we are in the middle of a chapter, please feel free to talk to us about it in the group channel. I can recommend the next best time to begin joining the meetings. Usually, it would be when we begin reading a new section or chapter that is fairly self-contained and does not depend a lot on material we have read previously.
Read on website | #mathematics | #number-theory | #book | #meetup