This is seriously cool stuff



On the surface it is just some nice mathematics, but the fact that you get finite answers when obeying symmetries just blows my mind (I'm not pretending to fully understand how it works - I just get that it works).

 


That is fascinating, @vlaiv.
I'd come across the -1/12 business previously, but only as an isolated oddity, a cautionary tale against careless manipulation of infinite series that don't converge absolutely. I was also vaguely aware of the renormalization practice for removing infinities in QFT, and while the maths was beyond me, it did seem to be a bit of an ugly kludge. Bringing the two things together like this is very intriguing. One of the onward YouTube links is to another video that does discuss (unlike this one) the connection with the zeta function and its analytic continuation, and the spooky ζ(-1) = -1/12.
It would be amazing if this new paper does prove to be onto something fundamental. It would be another example of deep connections between pure maths and fundamental physics.


Look up "Grandi's Series" on Wikipedia and follow the "See Also" links.

Lots of good fun there. In particular, 1+2+3+4+... = -1/12.

With a little ingenuity it is possible to make a divergent series sum to any value you wish.


12 minutes ago, Xilman said:

Lots of good fun there. In particular, 1+2+3+4+... = -1/12.

I like how they use a rigorous mathematical framework in the video to actually derive this identity.

The standard procedure for divergent series, or in fact any series, is to define a partial sum up to a certain N and then find the limit as N tends to infinity. We can see this as a weighted sum where the weights are 1, 1, 1, 1, ... (all the way up to the Nth position) and then 0, 0, 0, 0, ... for the rest.

Now, if I got this right from the video, it was Terence Tao who showed some years ago that any distribution that starts off as 1 and then transitions to 0 around N can be used all the same as weights.

When we examine different distributions, we get that the limit of the sum takes the form c * N^2 - 1/12 + ... some other stuff that we ditch as N goes to infinity. Some distributions set c to 0, leaving us with 0 * N^2 - 1/12 = -1/12 no matter how big N gets (it is still -1/12 in the limit, as anything multiplied by 0 is still 0).

And the final punch line is that symmetry dictates for which distributions c = 0 (think symmetry in physics - Noether's theorem - rather than symmetry of the distribution). This is why we can use this framework to regularize infinite sums in QFT (or at least there is a general impression that the two are deeply linked somehow - which is why the calculations work as they should even though there are "infinities" involved).
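To make that concrete, here is a minimal numerical sketch in Python (my own illustrative cutoff functions, not the ones used in the video or in the paper): the hard cutoff diverges, a smooth exponential cutoff behaves like c * N^2 - 1/12 with c = 1, and a cutoff engineered so that c = 0 leaves -1/12 on its own.

```python
import math

def smoothed_sum(eta, N, n_max=None):
    """Sum n * eta(n / N) for n = 1 .. n_max (far enough out that the cutoff has died away)."""
    if n_max is None:
        n_max = 60 * N
    return math.fsum(n * eta(n / N) for n in range(1, n_max + 1))

def hard(x):
    """The classical partial sum: weights 1, 1, ..., 1 up to N, then 0."""
    return 1.0 if x <= 1.0 else 0.0

def expo(x):
    """A smooth cutoff with c = integral of x * exp(-x) dx = 1, so the sum behaves like N^2 - 1/12."""
    return math.exp(-x)

def tuned(x):
    """An illustrative cutoff chosen so that c = integral of x * eta(x) dx = 0
    (my own choice, not the one from the video): the N^2 term drops out entirely."""
    return (1.0 - x / 2.0) * math.exp(-x)

for N in (10, 100, 1000):
    print(N,
          smoothed_sum(hard, N),          # grows like N^2 / 2: diverges
          smoothed_sum(expo, N) - N**2,   # tends to -1/12 once the N^2 growth is subtracted
          smoothed_sum(tuned, N))         # tends to -1/12 directly, with no N^2 term to remove
```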


The value of -1/12 is just the value where a graph of the results of the partial summations of 1+2+3+4..., if extended backwards, would cross the y axis. From Wikipedia:

[Wikipedia graph: partial sums of 1 + 2 + 3 + 4 + ..., with the smoothed asymptote crossing the y axis at -1/12]

Similarly, the series 1+1+1+1..., when plotted, intercepts the y axis at -1/2:

[Wikipedia graph: partial sums of 1 + 1 + 1 + 1 + ..., with the asymptote crossing the y axis at -1/2]

To say that if the graph is extended to infinity the result is the same as if it were extended to zero in the other direction is obviously wrong. But if you manipulate the results of the partial summations by using a smooth cutoff function - and therefore get the result of every partial summation incorrect, and a lot less than it would be with an actual hard cutoff (as shown on Numberphile's brown paper writings) - you can create any result you wish.

By choosing suitable weighting functions you can achieve a result which is the same as the value at x = 0 on the graph. Interesting maybe, but not meaningful as a result of the series.
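For what it's worth, that 'y intercept' reading can be checked numerically. A rough sketch of my own (using an exponential cutoff purely for illustration): compute the smoothed sums for several values of N, fit a low-order polynomial in N, and read off the constant term.

```python
import math
import numpy as np

def smoothed(f, N, factor=40):
    """Sum f(n) * exp(-n / N) for n = 1 .. factor*N (far enough that the cutoff has died away)."""
    return math.fsum(f(n) * math.exp(-n / N) for n in range(1, factor * N + 1))

Ns = np.array([100, 200, 400, 800])

# 1 + 2 + 3 + ... : the smoothed sums grow like N^2; the fitted constant term is the 'y intercept'.
s123 = [smoothed(lambda n: n, N) for N in Ns]
print(np.polyfit(Ns, s123, 2))    # the last coefficient comes out close to -1/12 = -0.0833...

# 1 + 1 + 1 + ... : the smoothed sums grow like N; the fitted constant comes out close to -1/2.
s111 = [smoothed(lambda n: 1.0, N) for N in Ns]
print(np.polyfit(Ns, s111, 1))
```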

Why is a hard cutoff for each partial sum bad, when each sum starts with a hard 1 and every increment is a hard jump in value (unless it's like in the video, where the last increment can be any value you want)? 😁

Why do all these series mentioned start from 1 and not 0? Using 0 should be perfectly valid, but it would spoil the results, so it's omitted - yet an implied value at 0 is used as the answer. 🤔

Just my two penneth. 😃

Alan


2 minutes ago, symmetal said:

Why is a hard cutoff for each partial sum bad, when each sum starts with a hard 1 and every increment is a hard jump in value (unless it's like in the video, where the last increment can be any value you want)? 😁

It's not bad - it is just one of the weighting schemes, and it indeed produces an infinity.

Many different weighting schemes also produce infinity, but there are some that produce -1/12.

I also think that you can't have an arbitrary weighting scheme - only one that satisfies certain criteria. I'm not sure what those criteria are, but as far as I can tell Terence Tao did research into that and probably proved that a certain class of weighting schemes are equivalent.

Perhaps one criterion is that the weighting function needs to tend to 0 as one moves to the right of N faster than N approaches infinity, and similarly that the weighting function needs to tend to 1 as one moves to the left of N (again faster than N goes to infinity) - or some other requirement like that.

My firm belief is that the sum 1+2+3+4+5+... off to infinity is one way of calculating a certain value - but a flawed way of doing it, as it does not converge in the classical sense. That does not mean the actual number does not exist. Another way of calculating the same value would be to use a different weighting function - and some of those weighting functions are not flawed and do allow you to calculate the value.

The zeta function is yet another way to calculate the same value (which again works).

The point is not that the infinite sum is equal to -1/12, but rather that we have some value that is -1/12 and there are different ways to calculate it - one of which fails, but we know why it fails, so when we encounter that way of calculating we know what the answer should be, even if we can't actually pull off that particular calculation.

This is why it works in physics - we know that the answer is right; it is just the "algorithm" to calculate it that is flawed. The above paper gives us better insight into why it's flawed, and into the correct ways to calculate such values, which we can use when we stumble onto a flawed way of calculating them.

 


10 minutes ago, vlaiv said:

Perhaps one criterion is that the weighting function needs to tend to 0 as one moves to the right of N faster than N approaches infinity, and similarly that the weighting function needs to tend to 1 as one moves to the left of N (again faster than N goes to infinity) - or some other requirement like that.

But this means that at N the answer is always less than the value that would be obtained with a weighting function of 1 (by an amount depending on the weighting function), and it's all these values of N which are being summed, so you're not summing the series but summing the weighting function result. The higher the value of N, the closer the weighting function result gets to zero, and therefore the weighting function results graph approaches a minimum fixed value equal to where the 'real' plotted summation result crosses the y axis. Why bother doing the summation with fancy weighting functions when plotting the first few partial summation results on a graph gives the same answer right away?

It's an answer alright, but to say 1+2+3+4... = -1/12 is just a theoretical mathematical convenience to avoid dealing with infinity, rather than a realistic, meaningful answer.

Alan


While I don't understand the maths, my thought would be that it is misleading to state 1+2+3+4... = -1/12. It's actually a different function we are summing, which is arguably a poor approximation to the original.

That being said, if it can be used as a tool to remove infinities from QFT, that's really clever, and a really interesting bit of research. Pure maths at its best!

It's a great video and Tony Padilla is fantastic. His enthusiasm is infectious. I wish I had had lecturers like him!

Malcolm 


12 hours ago, symmetal said:

But this means that at N the answer is always less than the value that would be obtained with a weighting function of 1,

Why?

If we look at the weighting function up to N - sure, it will produce a smaller result than all ones up to N, but if we include the weighting coefficients above N, which are greater than zero - why do you think the sums will be different in the limit where N tends to infinity?

12 hours ago, symmetal said:

so you're not summing the series but summing the weighting function result.

But when you are summing the series you are doing the same thing - you are summing weighted numbers, except in this case the weights are 1, 1, 1, 1, 1, 1, ..., 0, 0, 0, 0, 0, ...

12 hours ago, symmetal said:

The higher the value of N, the closer the weighting function result gets to zero and therefore the weighting function results graph approaches a minimum fixed value equal to where the 'real' plotted summation result crosses the y axis. Why bother doing the summation with fancy weighting functions when plotting the first few partial summation results on a graph gives the same answer right away.

I'm not sure you are understanding the weighting function properly.

Let me show you with a graph

[Attached sketch: the two weighting functions plotted against n - the hard step cutoff and a roughly drawn (green) smooth cutoff]

Green is poorly drawn and it should look like this:

[Attached sketch: the corrected smooth weighting function, transitioning smoothly from 1 to 0 around N]

There should be a smooth transition at each ever-higher N.

For the most part the two weighting functions are roughly the same - only around N is there a difference in how they pass through the value N: either discontinuously or smoothly (the latter making it an analytic function).

Both of the above weighting functions would produce similar-looking graphs when you plot the calculated values against N, as you suggested (much like the Wikipedia plot of partial sums shown earlier).

The only difference is that when you apply the rigorous mathematical framework of limits, some smooth weighting functions give a converging result, while for others the result remains divergent (as in the case of the step function).
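One crude way to see that difference numerically (my own construction - again using the exponential weighting as a stand-in for a smooth cutoff, not the procedure from the video or the paper): both schemes grow like a constant times N^2, so their graphs look alike, but if you combine the values at N and 2N so that any c * N^2 piece cancels exactly, the smooth scheme leaves a stable -1/12 behind while the step cutoff leaves something that still grows with N.

```python
import math

def S_smooth(N, factor=60):
    """Exponentially smoothed sum of 1 + 2 + 3 + ... at scale N."""
    return math.fsum(n * math.exp(-n / N) for n in range(1, factor * N + 1))

def S_hard(N):
    """Ordinary partial sum up to N (the step-function weighting)."""
    return N * (N + 1) / 2

# S(2N) - 4*S(N) kills any c*N^2 term exactly; dividing by -3 exposes the would-be constant.
for N in (100, 1000):
    print(N,
          -(S_smooth(2 * N) - 4 * S_smooth(N)) / 3,   # settles at -1/12
          -(S_hard(2 * N) - 4 * S_hard(N)) / 3)       # equals N / 3: no stable constant
```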

Maybe it is best to think of it this way:

let's solve

X^2 = -4

If you try the "brute force" approach of finding the square root of -4, you might end up in a calculation without end (try any iterative method designed for positive numbers in order to calculate X).

But if we write the above expression a little bit differently - like this:

X^2 = 4 * i^2

Then it is trivial to calculate that X = 2i - even if we use an iterative method to calculate the square root of 4, which will work in this case.

You might say: but you used a trick! Sure, but it is a valid, well-defined mathematical trick that is consistent with the rest of mathematics - not much trickier than, say, checking whether a number is divisible by two by examining its last digit.
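For what it's worth, the same idea fits in a couple of lines of Python: the 'brute force' real-number route fails outright, while the complex-number framework answers the same question without fuss.

```python
import math
import cmath

try:
    math.sqrt(-4)                        # the "brute force" real-valued route fails outright
except ValueError as err:
    print("real sqrt:", err)             # math domain error

print("complex sqrt:", cmath.sqrt(-4))   # 2j
print("check:", cmath.sqrt(-4) ** 2)     # (-4+0j)
```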

 


Ding ding!! Round 3: 😁

22 hours ago, vlaiv said:

The point is not that the infinite sum is equal to -1/12, but rather that we have some value that is -1/12 and there are different ways to calculate it - one of which fails, but we know why it fails, so when we encounter that way of calculating we know what the answer should be, even if we can't actually pull off that particular calculation.

But it's explicitly stated in the video that the infinite sum is equal to -1/12. All I see is that the value of -1/12 is the same as the projected value that would be obtained if N = 0, and not if N = ∞. But 0 is outside the range of the series. That's my main concern. Removing ∞ from the result by using a 'suitable' smooth transition weighting function has also removed the results of the higher partial sums completely, not just at infinity, and you're left with the residual -1/12. This doesn't seem anything to be surprised at really, and it is not actually relevant to the sum of the series in the real world.

Below he has written the sigma sum from 0 to ∞, when shouldn't it be the sum from 1 to ∞? 🤔 If N = 0 then the partial sum is 0 and not -1/12, which ruins everything.

Also the hard transition he is indicating at N has a rounded top. Surely it would be a sharp cutoff as I've shown in red.

[Annotated screenshot from the video: the sum written with its cutoff at N; the sharp cutoff is marked in red]

The symbol C below is the smooth weighting function e^(-n/N) when N is large. For the result to be -1/12, C must equal 0, which it doesn't here, as the result still goes to ∞. He quickly dismisses this and uses another, more complicated weighting function involving a cosine, which he admits he reverse engineered to produce the result of -1/12.

[Screenshot from the video: the regularized sum with the e^(-n/N) weighting and the coefficient C]

This is the graph of the cosine weighting function which, surprise surprise, produces the result he wants by converting an actually divergent function into a fabricated convergent one, and, as it produces a fixed result, can amazingly be used as a substitute for the divergent function. At a value as low as N = 4 the partial sum results have been virtually eliminated, so of course ∞ has been removed from the result.

[Screenshot from the video: graph of the cosine-based weighting function]

The series sum should be expressed as 

(Weighting function which gives the result we want) * (sum of 1+2+3+4.....+∞) = result we want. ***round of applause***

This doesn't look as clickbaity as 1+2+3+4+...+∞ = -1/12, so it wouldn't get so many views. 😁

Alan


I looked at the link, vlaiv, but got lost fairly quickly when he started calculating Riemann zeta function values where Re(s) <= 1 and where the series are divergent. If you apply analytic continuation 🤔 of the zeta function from Re(s) > 1 into the realm where Re(s) <= 1 and throw some Bernoulli numbers into the mix, out pop these fixed numbers, of which -1/12 is just one of many. Writing out the corresponding series at those points, you end up with results like these:

1+1+1+1.... = -1/2

1+2+3+4... = -1/12

1+4+9+16.... = 0
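For anyone who wants to poke at those numbers directly, the analytically continued zeta function is available in the mpmath library (assuming it is installed); the three values above drop straight out:

```python
import mpmath   # third-party: pip install mpmath

print(mpmath.zeta(0))    # -0.5        <->  1 + 1 + 1 + 1 + ...
print(mpmath.zeta(-1))   # -0.0833...  <->  1 + 2 + 3 + 4 + ...   (i.e. -1/12)
print(mpmath.zeta(-2))   #  0.0        <->  1 + 4 + 9 + 16 + ...
```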

From the article:

Quote

Clearly, these formulae do not make sense if one stays within the traditional way to evaluate infinite series, and so it seems that one is forced to use the somewhat unintuitive analytic continuation interpretation of such sums to make these formulae rigorous. But as it stands, the formulae look “wrong” for several reasons. Most obviously, the summands on the left are all positive, but the right-hand sides can be zero or negative. A little more subtly, the identities do not appear to be consistent with each other.

I found the YouTube video misleading, with some sloppy errors, as I mentioned above. 0 and -1/2 are just as 'important' as -1/12, along with many others, in 'saving us from infinity', and the article and video ignore the simple fact that the results are just the value of the series if extended backwards from 1 to zero - which is plainly evident if you plot the series (in the traditional way 🙂) and then discard the whole series, including infinity, leaving you with just the value at zero. That's what I find the most annoying. 😟 This doesn't need analytic continuation, Bernoulli numbers, or carefully manufactured weighting functions to get the same result.

"This reminds me of string theory and elements of quantum field theory" in the video was the icing on the cake to give it some more 'wow' factor.

I used to enjoy Numberphile when they dealt with real-world number facts to create amazing real results, but I will be wary from now on, especially with this presenter.

Thanks vlaiv for getting me annoyed. 😁

Alan


I think a lot of the maths (and physics) reporting on YouTube is a bit sensational, and I can see why: it's too hard to capture many people's attention doing elaborate computations with series and integrals, so throw them a crazy-looking identity to provoke interest. There are lots of interesting (and rigorous) ways to derive the value of the zeta function at -1 with a bit of analysis. Euler already had an almost complete argument for the closely related eta function. The maths of the video is rigorous; the presentation in places is not.


We might be focusing on bits that are less important.

What do you say about:

- the paper coming out about the selection of the regularization weighting function

- the fact that the choice of weighting function that produces a finite value is related to the symmetries in physics


I think in physics there are lots of quantities (particularly in quantum mechanics) that potentially sum to infinity. However, in any meaningful physical process it is the difference in quantities that matters, so these infinities should "cancel" out. A lot of modern research is about working out how to do this in a mathematically rigorous way.


To me, it seems like a pointer towards a different way of expressing our theories, and a path to get there.

If we have a theory which necessarily involves summing to infinity, and we need to use clever regularization tricks to avoid that, it just sounds like we are using the wrong way of calculating something that in fact does not involve infinity. By examining how best to regularize things, I'm guessing we will get more insight into how to formalize our theory differently so as to avoid infinite sums in the first place.


1 minute ago, iantaylor2uk said:

Actually a much better video on the same topic is the one below:

Can't tell if that is a joke or you posted the wrong link by mistake :D

 


10 minutes ago, vlaiv said:

Can't tell if that is a joke or you posted the wrong link by mistake :D

 

Sorry - I was trying to post the correct link using my phone and it somehow got the wrong one - it has now been corrected.


8 hours ago, vlaiv said:

Sorry, did not mean to do that.

It's OK vlaiv, I was only joking. 😊

8 hours ago, vlaiv said:

We might be focusing on bits that are less important.

What do you say about:

- the paper coming out about the selection of the regularization weighting function

- the fact that the choice of weighting function that produces a finite value is related to the symmetries in physics

I can't really comment on that myself, as it's getting beyond my knowledge level on the subject. I only watched the video due to the clickbait title, so in that sense the video succeeded in what it wanted to achieve. I see someone has posted the video in the comments on the Terence Tao page you linked to. I don't think Mr. Tao would appreciate that. 🙂

8 hours ago, iantaylor2uk said:

Actually a much better video on the same topic

Thanks Ian for that link. I'll watch it fully later on, but his comments at the start saying that the Numberphile video is wrong on every level made me smile a little. 🙂

Alan


21 minutes ago, symmetal said:

Thanks Ian for that link. I'll watch it fully later on, but his comments at the start saying that the Numberphile video is wrong on every level made me smile a little. 🙂

FYI, there are two Numberphile videos on the subject: one from 2014 and one from 2024. The video you are talking about refers to the first Numberphile video; it's not addressing the latest one.

