What are the most mindblowing things in mathematics?
What concepts or facts do you know from math that are mind-blowing, awesome, or simply fascinating?
Here are some I would like to share:
Gödel's incompleteness theorems: There are some problems in math so difficult that they can never be solved, no matter how much time you put into them.
Halting problem: It is impossible to write a program that can determine, for every input program, whether it loops forever or eventually finishes running. (Undecidability)
The Busy Beaver function
Now this is the mind-blowing one. What is the largest finite number you know? Graham's Number? TREE(3)? TREE(TREE(3))? This one will beat it easily.
The Busy Beaver function grows faster than any function that can ever be computed. Its values are so large that we cannot compute them even in principle: no program, running on however powerful a computer, can evaluate the function for all inputs.
In fact, just the mere act of being able to compute the value would mean solving the hardest problems in mathematics.
Σ(1) = 1
Σ(4) = 13
Σ(6) > 10^10^10^10^10^10^10^10^10^10^10^10^10^10^10 (a power tower of fifteen 10s)
Σ(17) > Graham's Number
Σ(27): there is a 27-state Turing machine that halts if and only if the Goldbach conjecture is false, so being able to compute this value would settle the conjecture.
Σ(744): likewise, there is a 744-state machine that halts if and only if the Riemann hypothesis is false, so this value would settle the hypothesis.
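To make "Turing machine" and "Σ" concrete, here is a tiny simulator (my own sketch, plain Python) run on the known 2-state, 2-symbol busy beaver champion, which writes Σ(2) = 4 ones before halting:

```python
# A tiny Turing machine simulator, run on the 2-state busy beaver champion.
def run_turing_machine(program, max_steps=10_000):
    """Run a 2-symbol Turing machine given as {(state, symbol): (write, move, next_state)}.
    Returns (ones_written, steps) if it halts within max_steps, else None."""
    tape = {}            # sparse tape, default symbol 0
    pos, state = 0, "A"
    for step in range(max_steps):
        if state == "H":                      # halt state reached
            return sum(tape.values()), step
        write, move, state = program[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return None                               # did not halt in time

# The 2-state, 2-symbol busy beaver champion.
bb2 = {
    ("A", 0): (1, "R", "B"),
    ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"),
    ("B", 1): (1, "R", "H"),
}
print(run_turing_machine(bb2))  # (4, 6): four ones written, halts after 6 steps
```

Σ(n) is the maximum number of ones over all halting n-state machines; already at n = 5 there are tens of billions of machines to consider, which hints at why the function escapes computation.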
An extension of that is that every time you shuffle a deck of cards there's a high probability that that particular arrangement has never been seen in the history of mankind.
With the caveat that it's not the first shuffle of a new deck. Since card decks come out of the factory in the same order, the probability that the first shuffle will result in an order that has been seen before is a little higher than on a deck that has already been shuffled.
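The reason the claim works is the sheer size of 52!, the number of orderings of a deck. A one-liner makes the scale visible:

```python
import math

# Number of distinct orderings of a standard 52-card deck.
orderings = math.factorial(52)
print(f"{orderings:.3e}")    # about 8.066e+67
print(len(str(orderings)))   # a 68-digit number
```

Even at a billion shuffles per second since the Big Bang, you'd cover only a vanishing fraction of these orderings.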
For the uninitiated, the Monty Hall problem is a good one.
Start with 3 closed doors, and an announcer who knows what's behind each. The announcer says that behind 2 of the doors is a goat, and behind the third door is a car, but doesn't tell you which door leads to which. They then let you pick a door, and you will get what's behind the door. Before you open it, they open a different door than your choice and reveal a goat. Then the announcer says you are allowed to change your choice.
So should you switch?
The answer turns out to be yes. 2/3rds of the time you are better off switching. But even famous mathematicians didn't believe it at first.
I know the problem is easier to visualize if you increase the number of doors. Let's say you start with 1000 doors; you choose one, and the announcer opens 998 other doors with goats. This way it is evident that you should switch, because unless you were incredibly lucky to have picked the one prize door out of 1000, the other remaining door has it.
This is so mind blowing to me, because I get what you're saying logically, but my gut still tells me it's a 50/50 chance.
But I think the reason it is true is because the other person didn't choose the other 998 doors randomly. So if you chose any of the other 998 doors, it would still be between the door you chose and the winner, other than the 1/1000 chance that you chose right at the beginning.
Let's name the goats Alice and Bob. You pick at random between Alice, Bob, and the Car, each with 1/3 chance. Let's examine each case.
Case 1: You picked Alice. Monty eliminates Bob. Switching wins. (1/3)
Case 2: You picked Bob. Monty eliminates Alice. Switching wins. (1/3)
Case 3: You picked the Car. Monty eliminates either Alice or Bob. You don't know which, but it doesn't matter-- switching loses. (1/3)
It comes down to the fact that Monty always eliminates a goat, which is why there is only one possibility in each of these (equally probable) cases.
From another point of view: Monty revealing a goat does not provide us any new information, because we know in advance that he must always do so. Hence our original odds of picking correctly (p=1/3) cannot change.
In the variant "Monty Fall" problem, where Monty opens a random door, we perform the same analysis:
Case 1: You picked Alice. (1/3)
Case 1a: Monty eliminates Bob. Switching wins. (1/2 of case 1, 1/6 overall)
Case 1b: Monty eliminates the Car. Game over. (1/2 of case 1, 1/6 overall)
Case 2: You picked Bob. (1/3)
Case 2a: Monty eliminates Alice. Switching wins. (1/2 of case 2, 1/6 overall)
Case 2b: Monty eliminates the Car. Game over. (1/2 of case 2, 1/6 overall)
Case 3: You picked the Car. (1/3)
Case 3a: Monty eliminates Alice. Switching loses. (1/2 of case 3, 1/6 overall)
Case 3b: Monty eliminates Bob. Switching loses. (1/2 of case 3, 1/6 overall)
As you can see, there is now a chance that Monty reveals the car resulting in an instant game over-- a 1/3 chance, to be exact. If Monty just so happens to reveal a goat, we instantly know that cases 1b and 2b are impossible. (In this variant, Monty revealing a goat reveals new information!) Of the remaining (still equally probable!) cases, switching wins half the time.
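The case analysis above is easy to confirm with a quick Monte Carlo simulation (my own sketch, plain Python) of both variants:

```python
import random

def play(switch, monty_knows, rng):
    """Play one game; returns 'win', 'lose', or 'void' (Monty revealed the car)."""
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    if monty_knows:
        # Standard rules: Monty always opens a goat door you didn't pick.
        opened = rng.choice([d for d in doors if d != pick and d != car])
    else:
        # "Monty Fall": Monty opens a random unpicked door, possibly the car.
        opened = rng.choice([d for d in doors if d != pick])
        if opened == car:
            return "void"
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return "win" if pick == car else "lose"

rng = random.Random(0)
n = 100_000

p_standard = sum(play(True, True, rng) == "win" for _ in range(n)) / n
fall = [play(True, False, rng) for _ in range(n)]
valid = [r for r in fall if r != "void"]
p_fall = sum(r == "win" for r in valid) / len(valid)
print(f"standard, always switch:   {p_standard:.3f}")  # ~ 0.667
print(f"Monty Fall, always switch: {p_fall:.3f}")      # ~ 0.500
```

Switching wins about 2/3 of the time under the standard rules, but only about 1/2 of the time in the Monty Fall variant, conditioned on the games that weren't voided.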
To me, it makes sense because there was initially 2 chances out of 3 for the prize to be in the doors you did not pick. Revealing a door, exclusively on doors you did not pick, does not reset the odds of the whole problem, it is still more likely that the prize is in one of the door you did not pick, and a door was removed from that pool.
Imo, the key element here is that your own door cannot be revealed early, or else changing your choice would not matter. Your door is never "tested", which ultimately makes the other door more "vouched for", statistically; and since the prize was more likely to be in the other set to begin with, well, might as well switch!
Like on paper the odds on your original door were 1/3, and it feels like the remaining door should now be 1/2. But the reveal doesn't change your door's original 1/3, so the other door carries the remaining 2/3. Only if the host opened a door at random would the two doors end up at 1/2 each.
But even famous mathematicians didn't believe it at first.
They emphatically did not believe it at first. Marilyn vos Savant was flooded with about 10,000 letters after publishing the famous 1990 article, and had to write two followup articles to clarify the logic involved.
It took me a while to wrap my head around this, but here’s how I finally got it:
There are three doors and one prize, so the odds of the prize being behind any particular door are 1/3. So let’s say you choose door #1. There’s a 1/3 chance that the prize is behind door #1 and, therefore, a 2/3 chance that the prize is behind either door #2 OR door #3.
Now here’s the catch. Monty opens door #2 and reveals that it does not contain the prize. The odds are the same as before – a 1/3 chance that the prize is behind door #1, and a 2/3 chance that the prize is behind either door #2 or door #3 – but now you know definitively that the prize isn’t behind door #2, so you can rule it out. Therefore, there’s a 1/3 chance that the prize is behind door #1, and a 2/3 chance that the prize is behind door #3. So you’ll be twice as likely to win the prize if you switch your choice from door #1 to door #3.
First, fuck you! I couldn't sleep. The probability of winning the car when you switch is the probability that your first choice was a goat, which is 2/3, because when you always switch, you win exactly when your first choice was a goat.
Goldbach's conjecture: every even number greater than 2 is the sum of two primes. Such a simple statement, right? Notice the word "conjecture". It has been verified up to 4x10^18 BUT no one has been able to prove it mathematically to date! It's one of the best-known unsolved problems in mathematics.
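Checking Goldbach's conjecture (every even number > 2 is the sum of two primes) by brute force is easy for small numbers; a quick sketch in plain Python, which is verification, of course, not proof:

```python
# Brute-force Goldbach check for small even numbers.
def is_prime(n):
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_pair(n):
    """Return primes (p, q) with p + q == n, or None if no pair exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

print(goldbach_pair(28))   # (5, 23)
assert all(goldbach_pair(n) is not None for n in range(4, 10_000, 2))
```

The loop confirms the conjecture for every even number up to 10,000; actual searches have pushed this to 4x10^18, but no finite search settles the infinite claim.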
How can you prove something in math when numbers are infinite? That number you gave: if it works up to there, can't we call it proven? I'm not sure I understand.
There are many structures of proof. A simple one might be to prove a statement is true for all cases, by simply examining each case and demonstrating it, but as you point out this won't be useful for proving statements about infinite cases.
Instead you could assume, for the sake of argument, that the statement is false, and show how this leads to a logical inconsistency, which is called proof by contradiction. For example, Georg Cantor used a proof by contradiction to demonstrate that the set of Natural Numbers (1, 2, 3, 4...) is smaller than the set of Real Numbers (which includes the Naturals and all decimal numbers like pi and 69.6969696969...), and so there exist different "sizes" of infinity!
For a method explicitly concerned with proofs about infinite numbers of things, you can try Proof by Mathematical Induction. It's a bit tricky to describe...
First demonstrate that a statement is true in some 1st base case.
Then demonstrate that if it holds true for the abstract Nth case, then it necessarily holds true for the (N+1)th case (by doing some clever rearranging of algebra terms or something)
Therefore since it holds true for the 1th case, it must hold true for the (1+1)th case = the 2th case. And since it holds true for the 2th case it must hold true for the (2+1)=3th case. And so on ad infinitum.
Wikipedia says:
Mathematical induction can be informally illustrated by reference to the sequential effect of falling dominoes.
Bear in mind, in formal terms a "proof" is simply a list of statements that begins with axioms (taken as true) and in which each line is derived from the lines above it by rules of inference.
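To make the induction recipe concrete, here is the classic worked example (my own illustration, not from the thread): the formula for the sum of the first n numbers.

```latex
\textbf{Claim:}\quad 1 + 2 + \dots + n = \frac{n(n+1)}{2} \text{ for all } n \ge 1.

\textbf{Base case } (n = 1):\quad 1 = \frac{1 \cdot 2}{2}.\ \checkmark

\textbf{Inductive step:} assume the formula holds for $n$. Then
\[
1 + 2 + \dots + n + (n+1) = \frac{n(n+1)}{2} + (n+1)
= \frac{(n+1)(n+2)}{2},
\]
which is exactly the formula for $n+1$. So by induction it holds for every $n \ge 1$.
```

The base case is the first domino; the inductive step is the guarantee that each domino knocks over the next.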
As you said, we have infinite numbers so the fact that something works till 4x10^18 doesn't prove that it will work for all numbers. It will take only one counterexample to disprove this conjecture, even if it is found at 10^100. Because then we wouldn't be able to say that "all" even numbers > 2 are a sum of 2 prime numbers.
So mathematicians strive for general proofs. You start with something like: Let n be any even number > 2. Now using the known axioms of mathematics, you need to prove that for every n, there always exists two prime numbers p,q such that n=p+q.
Would recommend watching the following short and simple video on the Pythagorean theorem; it'd make it perfectly clear how proofs work in mathematics. You know the theorem, right? For any right-angled triangle, the square of the hypotenuse is equal to the sum of the squares of the other two sides. Now we can verify this for billions of different right-angled triangles, but that wouldn't make it a theorem. It is a theorem because we have proved it mathematically for the general case using other known axioms of mathematics.
That's a really great question. The answer is that mathematicians keep their statements general when trying to prove things. Another commenter gave a bunch of examples as to different techniques a mathematician might use, but I think giving an example of a very simple general proof might make things more clear.
Say we wanted to prove that an even number plus 1 is an odd number. This is a fact that we all intuitively know is true, but how do we know it's true? We haven't tested every single even number in existence to see that itself plus 1 is odd, so how do we know it is true for all even numbers in existence?
The answer lies in the definitions for what is an even number and what is an odd number. We say that a number is even if it can be written in the form 2n, where n is some integer, and we say that a number is odd if it can be written as 2n+1. For any number in existence, we can tell if it's even or odd by coming back to these formulas.
So let's say we have some even number. Because we know it's even, we know we can write it as 2n, where n is an integer. Adding 1 to it gives 2n+1. This is, by definition, an odd number. Because we didn't restrict at the beginning which even number we started with, we proved the fact for all even numbers, in one fell swoop.
You can take any map of anything and color it in using only four colors so that no adjacent “countries” are the same color. Often it can be done with three!
What about a hypothetical country that is shaped like a donut, and the hole is filled with four small countries? One of the countries must have the color of one of its neighbors, no?
In that image, you could color yellow into purple since it's not touching purple. Then, you could color the red inner piece to yellow, and have no red in the inner pieces.
Note you'll need the regions to be connected (or allow yourself to color things differently if they are the same 'country' but disconnected). I forget if this causes problems for any world map.
Isn't the proof of this theorem enormously long or something (done by a computer)? I mean, how can you even be sure that it is correct? There might be some error somewhere.
I came here to find some cool, mind-blowing facts about math and have instead confirmed that I'm not smart enough to have my mind blown. I am familiar with some of the words used by others in this thread, but not enough of them to understand, lol.
Nonsense! I can blow both your minds without a single proof or mathematical symbol, observe!
There are different sizes of infinity.
Think of integers, or whole numbers; 1, 2, 3, 4, 5 and so on. How many are there? Infinite, you can always add one to your previous number.
Now take odd numbers; 1, 3, 5, 7, and so on. How many are there? Again, infinite because you just add 2 to the previous odd number and get a new odd number.
Both of these are infinite, and here's the twist: even though the odd numbers are missing all the evens, the two sets are the same size, because you can pair them off 1↔1, 2↔3, 3↔5, and so on forever. To get a genuinely bigger infinity you need something like the real numbers, which cannot be paired off with the integers at all.
There was a response I left in the main comment thread but I'm not sure if you will get the notification. I wanted to post it again so you see it
Response below
Please feel free to ask any questions! Math is a wonderful field full of beauty but unfortunately almost all education systems fail to show this and instead makes it seem like raw robotic calculations instead of creativity.
Math is best learned visually and with context to more abstract terms. 3Blue1Brown is the best resource in my opinion for this!
Here's a mindblowing fact for you along with a video from 3Blue1Brown. Imagine you slide a 1,000,000 kg box into a 1 kg box on an ice surface with no friction. The 1 kg box hits a wall and bounces back to hit the 1,000,000 kg box again, over and over.
The total number of bounces is 3141: the first digits of Pi. Crazy, right? Why would pi appear here? If you want to learn more, here's a video from the best math teacher in the world.
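This is Galperin's colliding-blocks result: a mass ratio of 100^N produces the first N+1 digits of pi. Counting the collisions only needs the velocities, so a sketch (my own code, not from the video) fits in a few lines:

```python
# Count elastic collisions in Galperin's setup: a heavy block slides into a
# light one, which bounces between it and a wall. Positions are irrelevant
# for counting, because block-block and block-wall events strictly alternate.
def count_collisions(mass_ratio):
    m1, m2 = float(mass_ratio), 1.0   # heavy block, light block
    v1, v2 = -1.0, 0.0                # heavy block moves toward the wall
    count = 0
    while True:
        if v1 < v2:                   # blocks collide (elastic collision)
            v1, v2 = (((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2),
                      ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2))
            count += 1
        elif v2 < 0:                  # light block bounces off the wall
            v2 = -v2
            count += 1
        else:                         # both drifting apart: no more collisions
            return count

for n in range(4):
    print(count_collisions(100 ** n))  # 3, 31, 314, 3141 -- digits of pi
```

With the 1,000,000 : 1 ratio from the comment (100^3), the count is 3141.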
Thanks! I appreciate the response. I've seen some videos on 3blue1brown and I've really enjoyed them. I think if I were to go back and fill in all the blank spots in my math experience/education I would enjoy math quite a bit.
I don't know why it appears here or why I feel this way, but picturing the box bouncing off the wall and back, losing energy, feels intuitively round to me.
For me, personally, it's the divisible-by-three check. You know, the little shortcut you can do where you add up the individual digits of a number and if the resulting sum is divisible by three, then so is the original number.
That, to me, is black magic fuckery. Much like everything else in this thread I have no idea how it works, but unlike everything else in this thread it's actually a handy trick that I use semifrequently
That one’s actually really easy to prove algebraically.
Not going to type out a full proof here, but here’s an example.
Let’s look at a two digit number for simplicity. You can write any two digit number as 10*a+b, where a and b are the first and second digits respectively.
E.g. 72 is 10 * 7 + 2.
And 10 is just 9+1, so in this case it becomes 72=(9 * 7)+7+2
We know 9 * 7 is divisible by 3 as it’s just 3 * 3 * 7.
Then if the number we add on (7 and 2) also sum to a multiple of 3, then we know the entire number is a multiple of 3.
You can then extend that to larger numbers as 100 is 99+1 and 99 is divisible by 3, and so on.
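The same reasoning works for 9 as well (every power of 10 is 1 more than a multiple of 9). A quick exhaustive check of the rule (my own sketch, plain Python):

```python
# Check the digit-sum rule: a number is divisible by 3 (and by 9) exactly
# when its digit sum is, because 10 ≡ 1 (mod 3) and (mod 9).
def digit_sum(n):
    return sum(int(d) for d in str(n))

for n in range(1, 100_000):
    assert (n % 3 == 0) == (digit_sum(n) % 3 == 0)
    assert (n % 9 == 0) == (digit_sum(n) % 9 == 0)
print("digit-sum rule holds for 1..99999")
```

This is a brute-force confirmation for small numbers; the 10 = 9 + 1 argument above is what makes it true for all of them.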
Does that hold for every base, where the divisor is 1 less than the base?
Specifically hexadecimal - could it be that 5 and 3 have the same "sum digits, get divisibility" property, since 15 (=3*5) is one less than the base number 16?
Like 2D₁₆ is 16*2+13 = 45, which is divisible by 3 and 5.
Can I make this into a party trick?! "Give me a number in hexadecimal, and I'll tell you if it's divisible by 10."
Am thinking it's 2 steps:
Does it end with a 0, 2, 4, 6, 8, A, C, E? Yes means divisible by 2.
Do the digits add up to a multiple of 5 (ok to subtract 5 liberally while adding)? Skip A and F. For B add 1; C->2, D->3, E->4. If the sum is divisible by 5, then original number is too.
So if 1 and 2 are "yes", it's divisible by 10.
E.g.
DEADBAE₁₆ (= 233495470₁₀): (1) ends with E, ok. (2) 3+4+3+1+4=15, divisible by 5. Both are true, so yes, divisible by 10.
C47444₁₆ (= 12874820₁₀): (1) ends with 4, ok. (2) 2+4+7+4+4+4=25, ok.
BEEFFACE₁₆ (= 3203398350₁₀): (1) E, ok. (2) 1+4+4+2+4=15, ok.
Is this actually true? Have I found a new party trick for myself? How would I even know if this is correct?
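It is correct, and you can check it without trusting anyone: since 16 ≡ 1 (mod 15), the hex digit-value sum preserves divisibility by 3 and 5 (A = 10 and F = 15 contribute 0 mod 5, matching the "skip A and F" step). A brute-force verification (my own sketch):

```python
# Verify the hexadecimal "party trick": in base 16, a number is divisible
# by 5 (and by 3) iff its digit-value sum is, because 16 ≡ 1 (mod 15).
def hex_digit_sum(n):
    return sum(int(d, 16) for d in format(n, "x"))

for n in range(1, 100_000):
    assert (n % 5 == 0) == (hex_digit_sum(n) % 5 == 0)
    assert (n % 3 == 0) == (hex_digit_sum(n) % 3 == 0)

# Divisible by 10 = divisible by 2 (last hex digit even) and by 5.
def hex_divisible_by_10(n):
    return n % 2 == 0 and hex_digit_sum(n) % 5 == 0

for name in ["DEADBAE", "C47444", "BEEFFACE"]:
    n = int(name, 16)
    print(name, n, hex_divisible_by_10(n))  # all True
```

So yes: new party trick acquired.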
If you think of complex numbers in their polar form, everything is much simpler. If you know basic calculus, it can be intuitive.
Instead of z = x + iy, write z = (r, t) where r is the distance from the origin and t is the angle from the positive x-axis. Now addition is trickier to write, but multiplication is simple: (a, b) * (c, d) = (ac, b + d). That is, the lengths multiply and the angles add. Multiplying by a number (1, t) simply adds t to the angle; in other words, multiplying a point by (1, t) is the same as rotating it counterclockwise about the origin by an angle t.
The function f(t) = (1, t) then parameterizes a circular motion at unit radius with constant angular velocity 1. The tangential velocity of a circular motion is perpendicular to the current position, and so the derivative of our function is a constant 90-degree rotation of itself. In radians, that means f'(t) = (1, pi/2) f(t). And now we have one of the simplest differential equations, whose solution can only be f(t) = k * e^(t * (1, pi/2)) = k e^(it) for some k. Given f(0) = 1, we have k = 1.
All that said, we now know that f(t) = e^(it) is a circular motion passing through f(0) = 1 with a rate of 1 radian per unit time, and e^(i pi) is halfway through a full rotation, which is -1.
If you don't know calculus, then consider the relationship between exponentiation and multiplication. We learn that when you take an interest rate of a fixed annual percent r and compound it n times a year, as you compound more and more frequently (i.e. as n gets larger and larger), the formula turns from multiplication (P(1+r/n)^(nt)) to exponentiation (Pe^(rt)). Thus, exponentiation is like a continuous series of tiny multiplications. Since, geometrically speaking, multiplying by a complex number (z, z^(2), z^(3), ...) causes us to rotate by a fixed amount each time, then complex exponentiation by a continuous real variable (z^t for t in [0,1]) causes us to rotate continuously over time. Now the precise nature of the numbers e and pi here might not be apparent, but that is the intuition behind why I say e^(it) draws a circular motion, and hopefully it's believable that e^(i pi) = -1.
All explanations will tend to have an algebraic component (the exponential and the number e arise from an algebraic relationship in a fundamental geometric equation) and a geometric component (the number pi and its relationship to circles). The previous explanations are somewhat more geometric in nature. Here is a more algebraic one.
The real-valued function e^(x) arises naturally in many contexts. It's natural to wonder if it can be extended to the complex plane, and how. To tackle this, we can fall back on a tool we often use to calculate values of smooth functions, which is the Taylor series. Knowing that the derivative of e^(x) is itself immediately tells us that e^(x) = 1 + x + x^(2)/2! + x^(3)/3! + ..., and now can simply plug in a complex value for x and see what happens (although we don't yet know if the result is even well-defined.)
Let x = iy be a purely imaginary number, where y is a real number. Then substitution gives e^x = e^(iy) = 1 + iy + i^(2)y^(2)/2! + i^(3)y^(3)/3! + ..., and of course since i^(2) = -1, this can be simplified:
So we're alternating between real/imaginary and positive/negative. Let's factor it into a real and imaginary component: e^(iy) = a + bi, where
a = 1 - y^(2)/2! + y^(4)/4! - y^(6)/6! + ...
b = y - y^(3)/3! + y^(5)/5! - y^(7)/7! + ...
And here's the kicker: from our prolific experience with calculus of the real numbers, we instantly recognize these as the Taylor series a = cos(y) and b = sin(y), and thus conclude that if anything, e^(iy) = a + bi = cos(y) + i sin(y). Finally, we have e^(i pi) = cos(pi) + i sin(pi) = -1.
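You can watch the series converge numerically (my own sketch, using only the standard library):

```python
import cmath
import math

# Partial sums of the Taylor series for e^z, evaluated at a complex point.
def exp_series(z, terms=40):
    total, term = 0, 1
    for k in range(terms):
        total += term
        term *= z / (k + 1)   # next term: z^k / k!
    return total

y = math.pi
print(exp_series(1j * y))                 # ≈ -1 + 0j
print(abs(cmath.exp(1j * y) + 1))         # ≈ 0: Euler's identity e^(i pi) = -1
```

Forty terms of the series already agree with cos(pi) + i sin(pi) = -1 to machine precision.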
The utility of Laplace transforms in regards to differential systems.
In engineering school you learn to analyze passive DC circuits early on using not much more than Ohm's law and Thevenin's theorem. This shit can be taught to elementary schoolers.
Then a little while later, you learn to solve the linear differential equations that model complex systems, whether electrical, mechanical, thermal, hydraulic, etc. This shit is no walk in the park.
Then Laplace transforms/identities come along and let you turn those differential problems in the time domain into much simpler algebraic problems in the frequency domain. Shit blows your mind.
THEN a mafacka comes along and teaches you that these tools can be used to turn complex differential system problems (electrical, mechanical, thermal, hydraulic, etc) into simple DC-style circuits you can analyze/solve in the frequency domain, then convert back into the time domain for the answers.
I know this is super applied calculus shit, but I always love that sweet spot where all the high-concept math finally hits the pavement.
And then they tell you that the fundamental equations for thermal, fluid, electrical and mechanical are all basically the same when you are looking at the whole Laplace thing. It's all the same....
ABSOLUTELY. I just recently capped off the Diff Eq, Signals, and Controls courses for my undergrad, and truly by the end you feel like a wizard. It's crazy how much problem-solving/system modeling power there is in such a (relatively) simple, easy to apply, and beautifully elegant mathematical tool.
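For anyone who hasn't met the transform: L{f}(s) = ∫₀^∞ e^(-st) f(t) dt. A minimal numeric check of one textbook pair, L{e^(-t)}(s) = 1/(s+1) (my own sketch, stdlib only, using a simple trapezoid rule):

```python
import math

# Numerically approximate a Laplace transform by trapezoid integration:
# L{f}(s) = integral_0^inf e^(-st) f(t) dt, truncated at t = upper.
def laplace_numeric(f, s, upper=40.0, steps=200_000):
    h = upper / steps
    total = 0.5 * (f(0.0) + f(upper) * math.exp(-s * upper))
    for k in range(1, steps):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

s = 2.0
approx = laplace_numeric(lambda t: math.exp(-t), s)
print(approx, 1 / (s + 1))   # both ≈ 0.3333
```

In practice you never integrate by hand like this: you look the pair up in a table, which is exactly why the method feels like a cheat code.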
I suspect this holds in any base-x numbering: the digit-sum trick works for any divisor of x − 1, the highest digit value. Try it with base 2 (1), 4 (3), 16 (F) or whatever.
Related: every time you shuffle a deck of cards you get a sequence that has never happened before. The chance of getting a sequence that has occurred is stupidly small.
Most of the time, but only as long as the shuffle is actually random. A perfect riffle shuffle on a brand-new deck will get you the same result every time, and 8 perfect riffles in a row get you back to where you started.
I'm guessing this is more pronounced at lower levels. At high level chess, I often hear commentators comparing the moves to their database of games, and it often takes 20-30 moves before they declare that they have now reached a position which has never been reached in a professional game. The high level players have been grinding openings and their counters and the counters to the counters so deeply that a lot of the initial moves can be pretty common.
Also, high level means that games narrow more towards the "perfect" moves, meaning that repetition of existing games is more likely.
Euler's identity, which elegantly unites some of the most fundamental constants in a single equation:
e^(iπ)+1=0
Euler's identity is often cited as an example of deep mathematical beauty. Three of the basic arithmetic operations occur exactly once each: addition, multiplication, and exponentiation. The identity also links five fundamental mathematical constants:
The number 0, the additive identity.
The number 1, the multiplicative identity.
The number π (π = 3.1415...), the fundamental circle constant.
The number e (e = 2.718...), also known as Euler's number, which occurs widely in mathematical analysis.
The number i, the imaginary unit of the complex numbers.
Furthermore, the equation is given in the form of an expression set equal to zero, which is common practice in several areas of mathematics.
Stanford University mathematics professor Keith Devlin has said, "like a Shakespearean sonnet that captures the very essence of love, or a painting that brings out the beauty of the human form that is far more than just skin deep, Euler's equation reaches down into the very depths of existence". And Paul Nahin, a professor emeritus at the University of New Hampshire, who has written a book dedicated to Euler's formula and its applications in Fourier analysis, describes Euler's identity as being "of exquisite beauty".
Mathematics writer Constance Reid has opined that Euler's identity is "the most famous formula in all mathematics". And Benjamin Peirce, a 19th-century American philosopher, mathematician, and professor at Harvard University, after proving Euler's identity during a lecture, stated that the identity "is absolutely paradoxical; we cannot understand it, and we don't know what it means, but we have proved it, and therefore we know it must be the truth".
This is the one that made me say out loud, "math is fucking weird"
I started trying to read the explanations, and it just got more and more complicated. I minored in math. But the stuff I learned seems trivial by comparison. I have a friend who is about a year away from getting his PhD in math. I don't even understand what he's saying when he talks about math.
Recall the existence and uniqueness theorem(s) for initial value problems. With this, we conclude that e^(kx) is the unique function f such that f'(x) = k f(x) and f(0) = 1. Similarly, any solution to f'' = -k^(2)f has the form f(x) = a cos(kx) + b sin(kx). Now consider e^(ix). Differentiating it is the same as multiplying by i, so differentiating twice is the same as multiplying by i^(2) = -1. In other words, e^(ix) is a solution to f'' = -f. Therefore, e^(ix) = a cos(x) + b sin(x) for some a, b. Plugging in x = 0 tells us a = 1. Differentiating both sides and plugging in x = 0 again tells us b = i. So e^(ix) = cos(x) + i sin(x).
We take for granted that the basic rules of calculus work for complex numbers: the chain rule, and the derivative of the exponential function, and the existence/uniqueness theorem, and so on. But these are all proved in much the same way as for real numbers, there's nothing special behind the scenes.
"The coastline paradox is the counterintuitive observation that the coastline of a landmass does not have a well-defined length. This results from the fractal curve–like properties of coastlines; i.e., the fact that a coastline typically has a fractal dimension."
This is my silly contribution: 70% of 30 is equal to 30% of 70. This applies to other numbers and can be really helpful when doing percentages in your head. 15% of 77 is equal to 77% of 15.
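The trick works because both sides are just (x*y)/100; a two-line sanity check (my own sketch):

```python
import math

# x% of y equals y% of x, because both are x * y / 100.
def pct(x, y):
    return x / 100 * y

assert math.isclose(pct(70, 30), pct(30, 70))  # both are 21
assert math.isclose(pct(15, 77), pct(77, 15))  # both are 11.55
print("percentages commute")
```

Handy in practice: 77% of 15 sounds hard, but 15% of 77 is just 7.7 + half of 7.7.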
I am a huge fan of 3blue1brown and his videos are just amazing. My favorite is linear algebra. It was like an out of body experience. All of a sudden the world made so much more sense.
For any prime p larger than 3, p² − 1 is divisible by 24. Here's why: every prime larger than 3 is either of form 6k+1 or 6k+5; the other four possibilities are divisible by 2 or by 3 (or by both). Now (6k+1)² − 1 = 6k(6k+2) = 12k(3k+1), and at least one of k and 3k+1 must be even. Also (6k+5)² − 1 = (6k+4)(6k+6) = 12(3k+2)(k+1), and at least one of 3k+2 and k+1 must be even.
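A quick empirical confirmation of the p² − 1 claim (my own sketch, plain Python):

```python
# Check: for every prime p > 3, p^2 - 1 is divisible by 24.
def is_prime(n):
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

primes = [p for p in range(5, 10_000) if is_prime(p)]
assert all((p * p - 1) % 24 == 0 for p in primes)
print(f"holds for all {len(primes)} primes between 3 and 10000")
```

The check passes for every prime in range, and the 6k±1 argument above shows it can never fail.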
Imagine a soccer ball. The most traditional design consists of white hexagons and black pentagons. If you count them, you will find that there are 12 pentagons and 20 hexagons.
Now imagine you tried to cover the entire Earth in the same way, using similar-size hexagons and pentagons (hopefully the rules are intuitive). How many pentagons would there be? Intuitively, you would think that the number of both shapes would scale up together, just like on the soccer ball: a lot of hexagons and a lot of pentagons. But actually, along with many hexagons, you would still have exactly 12 pentagons, not one less, not one more. This comes from Euler's polyhedron formula, V − E + F = 2.
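The derivation from Euler's formula is short enough to sketch here (my own write-up, assuming three faces meet at every vertex, as on a soccer ball). With p pentagons and h hexagons:

```latex
F = p + h, \qquad
E = \frac{5p + 6h}{2}, \qquad
V = \frac{5p + 6h}{3},
\]
since each edge borders two faces and each vertex touches three. Euler's formula then gives
\[
V - E + F
= \frac{5p + 6h}{3} - \frac{5p + 6h}{2} + p + h
= (p + h) - \frac{5p + 6h}{6}
= \frac{p}{6} = 2
\quad\Longrightarrow\quad p = 12.
```

The hexagon count h cancels out entirely, which is exactly why it is free to grow while the pentagon count stays pinned at 12.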
Borsuk-Ulam is a great one! In essence it says that flattening a sphere onto a disk will always make two antipodal points meet. This holds in arbitrary dimensions and leads to statements such as "there are two points on opposite sides of the equator with the same temperature". Similarly, there are two antipodal points on the earth that have both the same temperature and the same pressure.
Also honorable mentions to the hairy ball theorem for giving us the much needed information that there is always a point on the earth where the wind is not blowing.
Seeing as I was a bit heavy on the meteorological applications: as a corollary of Borsuk-Ulam there is also the ham sandwich theorem, for the aspiring hobby chefs.
Gödel's incompleteness theorem is actually even more subtle and mind-blowing than how you describe it. It states that in any consistent mathematical system powerful enough to express arithmetic, there are truths that cannot be proven using just the rules of that system. Proving them requires adding additional rules to the system. And when you do that, there are new truths that cannot be proven using the expanded rules of that system.
Incompleteness doesn't come as a huge surprise when you learn math in an axiomatic way rather than computationally. For me the treacherous part is actually knowing whether something is unprovable because of incompleteness or because no one has found a proof yet.
This is a common one, but the cardinality of infinite sets. Some infinities are larger than others.
The natural numbers are countably infinite, and any set that has a one-to-one mapping to the natural numbers is also countably infinite. So that means the set of all even natural numbers is the same size as the natural numbers, because we can map 0 → 0, 1 → 2, 2 → 4, 3 → 6, etc.
But that suggests we can also map a set that seems larger than the natural numbers to the natural numbers, such as the integers: 0 → 0, 1 → 1, 2 → –1, 3 → 2, 4 → –2, etc. In fact, we can even map pairs of integers to natural numbers, and because rational numbers can be represented in terms of pairs of numbers, their cardinality is that of the natural numbers. Even though the cardinality of the rationals is identical to that of the integers, the rationals are still dense, which means that between any two rational numbers we can find another one. The integers do not have this property.
But if we try to do this with real numbers, even a limited subset such as the real numbers between 0 and 1, it is impossible to perform this mapping. If you attempted to enumerate all of the real numbers between 0 and 1 as infinitely long decimals, you could always construct a number not present in the enumeration: make its nth digit differ from the nth digit of the nth listed number. This is Cantor's diagonal argument, which implies that the cardinality of the real numbers is strictly greater than that of the rationals.
The best part of this is that it is possible to construct a set that has the same cardinality as the real numbers but is not dense, such as the Cantor set.
Well that's not as hard as it sounds; [0,1] isn't dense in the reals either. It is however dense with respect to itself, in the sense that the closure of [0,1] in the reals is [0,1]. The Cantor set has the special property of being nowhere dense, which is to say that it contains no intervals (taking for granted that it is closed). It's like a bunch of disjoint, sparse dots with no length or substance, yet there are uncountably many points.
That you can have 5 apples, divide them zero times, and somehow end up with math shitting itself inside-out at you even though the apples are still just sitting there.
Except that dividing them zero times means you're not dividing them at all, so by doing nothing you are still left with 5 apples.
Euler's identity: e^(iπ) + 1 = 0. Three of the basic arithmetic operations occur exactly once each: addition, multiplication, and exponentiation. The identity also links five fundamental mathematical constants:
The number 0, the additive identity.
The number 1, the multiplicative identity.
The number π (π = 3.1415...), the fundamental circle constant.
The number e (e = 2.718...), also known as Euler's number, which occurs widely in mathematical analysis.
The number i, the imaginary unit of the complex numbers.
The fact that an equation like that exists at the heart of maths - feels almost like it was left there deliberately.
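You can even watch floating point almost agree with the identity; a quick Python check:

```python
import cmath
import math

# e^(i*pi) + 1 should be exactly 0; in double precision we land
# astronomically close, limited only by the rounding of pi itself.
value = cmath.exp(1j * math.pi) + 1
print(abs(value))  # on the order of 1e-16
```

The tiny residue is purely a floating-point artifact, not a property of the identity.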
A simple one: let's say you want to sum the numbers from 1 to 100. You could do the sum normally (1+2+3...) or you can rearrange the numbers into pairs: 1+100, 2+99, 3+98... until 50+51. So you have 50 pairs, and each of them sums to 101 -> 101*50 = 5050. There's a story that says this method was discovered by Gauss when he was still a child in elementary school, when his teacher asked the students to sum the numbers.
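The pairing trick is just the closed formula n(n+1)/2 in disguise; a tiny Python check:

```python
def gauss_sum(n):
    # Pair 1 with n, 2 with n-1, ...: n/2 pairs, each summing to n+1.
    return n * (n + 1) // 2

# Agrees with adding the numbers one by one
print(gauss_sum(100), sum(range(1, 101)))  # 5050 5050
```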
x^n + y^n = z^n has no solutions where n > 2 and x, y and z are all natural numbers. It's hard to believe that, knowing that it has an infinite number of solutions where n = 2.
Pierre de Fermat, after whom this theorem was named, famously claimed to have a proof in a marginal note in a book that he owned: "I have discovered a truly marvelous proof of this, which this margin is too narrow to contain." It took mathematicians several hundred years to actually find a proof.
I find the logistic map to be fascinating. The logistic map is a simple mathematical equation that surprisingly appears everywhere in nature and social systems. It is a great representation of how complex behavior can emerge from a straightforward rule. Imagine a population of creatures with limited resources that reproduce and compete for those resources. The logistic map describes how the population size changes over time as a function of its current size, and it reveals fascinating patterns. When the population is small, it grows rapidly due to ample resources. However, as it approaches a critical point, the growth slows, and competition intensifies, leading to an eventual stable population. This concept echoes in various real-world scenarios, from describing the spread of epidemics to predicting traffic jams and even modeling economic behaviors. Chaotic maps like it have even been used to generate pseudorandom numbers, since a computer can't actually produce truly random numbers deterministically. Veritasium did a good video on it: https://www.youtube.com/watch?v=ovJcsL7vyrk
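A minimal sketch of the map itself, x → r·x·(1−x), showing the three regimes (parameter names are mine; `r` is the growth rate):

```python
def logistic_orbit(r, x0=0.2, skip=500, keep=8):
    # Iterate x -> r*x*(1-x), discard a long transient, and return
    # what the orbit settles into (rounded for readability).
    x = x0
    for _ in range(skip):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

print(logistic_orbit(2.8))  # settles to a single fixed point
print(logistic_orbit(3.2))  # oscillates between two values
print(logistic_orbit(3.9))  # chaos: no repeating pattern
```

The same one-line rule produces a stable population, a cycle, or chaos, depending only on `r`.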
I find it fascinating how it permeates nature in so many places. It's a universal constant, but one we can't easily observe.
When all you learn about Pythagoras is that he was a mathematician known for triangles, learning about the cult and the murder definitely takes you by surprise.
Here's a fun one - you know the concept of regular polyhedra/Platonic solids, right? 3D shapes where every edge, angle, and face is the same? How many of them are there?
Did you guess 48?
There are way more regular solids out there than the bog-standard set of DnD dice! Some of them are easy to understand, like the Kepler–Poinsot solids, which basically use a pentagram in various orientations for the face shape (hey, the rules don't say the edges can't intersect!), all the way to... this thing. And more! This video is a fun breakdown (both mathematically and mentally) of all of them.
Unfortunately they only add like 4 new potential dice to your collection and all of them are very painful.
The Banach–Tarski theorem is up there. Basically, a solid ball can be decomposed into a finite number of pieces (each an infinite scattering of points) and reassembled, using only rotations and translations, into two copies of the original ball. Duplication is mathematically sound! But physically impossible.
How Gauss was able to solve 1+2+3...+99+100 in the span of minutes. It really shows you can solve math problems by thinking in different ways and approaches.
To me, personally, it has to be bezier curves. They're not one of those things that only real mathematicians can understand, and that's exactly why I'm fascinated by them. You don't need to understand the equations happening to make use of them, since they make a lot of sense visually. The cherry on top is their real world usefulness in computer graphics.
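A small Python sketch of De Casteljau's algorithm, the standard way to evaluate a Bézier curve by repeated linear interpolation, which is exactly the visual intuition:

```python
def bezier_point(control_points, t):
    # De Casteljau: repeatedly interpolate between neighbouring points
    # at parameter t until a single point remains.
    pts = list(control_points)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# A cubic Bezier: the endpoints are hit exactly, the two middle
# control points pull the curve toward themselves.
curve = [(0, 0), (0, 1), (1, 1), (1, 0)]
print(bezier_point(curve, 0.0))  # (0, 0)
print(bezier_point(curve, 0.5))  # (0.5, 0.75)
print(bezier_point(curve, 1.0))  # (1, 0)
```

Sampling `t` from 0 to 1 traces out the whole curve, which is why the algorithm maps so directly onto graphics code.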
The Julia and Mandelbrot sets always get me. That such a complex structure could arise from such simple rules. Here's a brilliant explanation I found years back: https://www.karlsims.com/julia.html
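The rule really is that simple; a rough Python sketch of the standard escape-time test:

```python
def in_mandelbrot(c, max_iter=100):
    # c is in the set if z -> z*z + c stays bounded (|z| <= 2)
    # when iterated from z = 0.
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

# Crude ASCII render of the set in the complex plane
for row in range(11):
    line = ''
    for col in range(31):
        c = complex(-2 + col * 0.1, -1 + row * 0.2)
        line += '#' if in_mandelbrot(c) else '.'
    print(line)
```

Two lines of iteration rule, and the boundary it draws is infinitely intricate.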
Glad you are enjoying it! Please feel free to share any other channels like Computerphile, Numberphile, Mutual Information, & Veritasium. It's difficult to find gems nowadays on YT.
Well, if you ever want to find some more mathtubers, just browse the #SoMe2 or #SoMe3 hashtags on YouTube. It led me to Morphocular, who does great videos with great visual quality similar to 3B1B. Another great channel is Another Roof with a completely different approach of videomaking. VSauce also has some good videos on math topics, if you like him.
I also came across this channel by Freya Holmér recently that does longer lecture-style videos on math for video game programmers. I found the one on splines to be quite enlightening (as I had recently been doing a lot of spline stuff and struggling with third-party packages).
I can't claim it's as high quality as the channels you've mentioned, but I actually have a channel! I only have one video at the moment, because they take a long time to make, but I'm planning on having the next one out perhaps within the next month.
I'm not sure Earth would be a correct analogy for spherical geometry. Correct me if I'm wrong, but spherical geometry is when the space itself is curved like a sphere, which is different from just living on the surface of a sphere.
The monster group is fascinating and mind-boggling: its smallest faithful representation has 196,883 dimensions, and its order is 808,017,424,794,512,875,886,459,904,961,710,757,005,754,368,000,000,000 ≈ 8×10^53. It's all about symmetry groups.
The fact that complex numbers allow you to get a much more accurate approximation of the derivative than classical finite difference at almost no extra cost under suitable conditions while also suffering way less from roundoff errors when implemented in finite precision:
(x and epsilon are real numbers and f is assumed to be an analytic extension of some real function)
This is the first one that's new to me! Really interesting, I guess the big problem is that you need to actually be able to evaluate the analytic extension.
The fact that complex numbers allow you to get a much more accurate approximation of the derivative than classical finite difference at almost no extra cost under suitable conditions while also suffering way less from roundoff errors when implemented in finite precision:
What?
The formula you linked is wrong, it should be O(epsilon). It's the same as for real numbers: f(x+h) = f(x) + hf'(x) + O(h^2). If we assume f(x) is real for real x, then taking imaginary parts, Im(f(x+ih)) = 0 + Im(ihf'(x)) + O(h^2) = hf'(x) + O(h^2).
I can assure you that you're the one who's wrong: the order-2 term is a real number, which means it goes away when you take the imaginary part, leaving you with O(epsilon^3), which becomes O(epsilon^2) after dividing by epsilon. This is called complex-step differentiation.
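A quick Python sketch of the idea, using sin so we can compare against the exact derivative (the step sizes here are just illustrative choices):

```python
import cmath
import math

def complex_step(f, x, h=1e-20):
    # Im(f(x + ih))/h has O(h^2) truncation error and, unlike finite
    # differences, no subtractive cancellation, so h can be made tiny.
    return f(complex(x, h)).imag / h

def central_diff(f, x, h=1e-6):
    # Classic central difference: accuracy is limited by the trade-off
    # between truncation error and floating-point cancellation.
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.7
exact = math.cos(x)
print(abs(complex_step(cmath.sin, x) - exact))  # near machine epsilon
print(abs(central_diff(math.sin, x) - exact))   # several digits worse
```

With the complex step there is no subtraction of nearly equal numbers, so shrinking `h` keeps improving the answer instead of destroying it.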
Incompleteness is great.. internal consistency is incompatible with universality.. goes hand in hand with Relativity.. they both are trying to lift us toward higher dimensional understanding..
I found the easiest way to think about it is as if there are 10 doors: you choose 1, then 8 other doors are opened. Do you stay with your first choice, or take the other remaining door? Or scale up to 100. Then you really see the advantage of swapping: you have a higher probability of winning by taking the last remaining door than of having chosen the correct door the first time.
Edit:
More generally, it's set theory, where the n doors split into your 1 door and the other (n-1). In the end you are shown n-2 goat doors out of that second set, but the probability that your initial pick was correct is still 1/n. You can think of switching as trading your one door for all of the other (n-1) doors, for a probability of (n-1)/n.
I find the easiest way to understand Monty Hall is to think of it in a meta way:
Situation A - A person picks one of three doors, 1 in 3 chance of success.
Situation B - A person picks one of two doors, 1 in 2 chance of success.
If you were an observer of these two situations (not the person choosing doors) and you were going to bet on which situation succeeds more often, you'd clearly bet on the second.
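A quick simulation backs up the 2/3 advantage of switching; a Python sketch (the host's deterministic tie-break when you've picked the car doesn't affect the overall odds):

```python
import random

def monty_hall(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a door that is neither the pick nor the car
        # (lowest such door if there are two candidates).
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(switch=False))  # ~1/3
print(monty_hall(switch=True))   # ~2/3
```

Switching wins exactly when the first pick was wrong, which happens 2/3 of the time.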
Saving this thread! I love math, even if I'm not great at it.
Something I learned recently is that there are as many real numbers between 0 and 1 as there are between 0 and 2, because you can always match a number x between 0 and 1 with the number 2x between 0 and 2. Someone please correct me if I mixed this up somehow.
You are correct. This notion of “size” of sets is called “cardinality”. For two sets to have the same “size” is to have the same cardinality.
The set of natural numbers (whole, counting numbers, starting from either 0 or 1, depending on which field you’re in) and the integers have the same cardinality. They also have the same cardinality as the rational numbers, numbers that can be written as a fraction of integers. However, none of these have the same cardinality as the reals, and the way to prove that is through Cantor’s well-known Diagonal Argument.
Another interesting thing that makes integers and rationals different, despite having the same cardinality, is that the rationals are "dense" in the reals. What "rationals are dense in the reals" means is that if you take any two real numbers, you can always find a rational number between them. This is, however, not true for integers. Pretty fascinating, since it shows that the intuitive notion of "relative size" is really capturing something about distance, i.e., a metric, and cardinality is defined precisely to strip that notion away.
Integrals. I can have an area function, integrate it, and then have a volume.
And if you look at it from the Riemann sum angle, you are pretty much adding up an infinite number of tiny volumes (the area times the width of each slice) to get the full volume.
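That Riemann-sum picture is easy to try numerically; a Python sketch slicing a unit sphere into disks (the function names are mine):

```python
import math

def volume_by_slices(area, a, b, n=100_000):
    # Riemann (midpoint) sum: add up area(x) * dx over thin slices.
    dx = (b - a) / n
    return sum(area(a + (i + 0.5) * dx) for i in range(n)) * dx

# The cross-section of a unit sphere at height x is a disk of
# area pi * (1 - x^2), so integrating from -1 to 1 gives the volume.
vol = volume_by_slices(lambda x: math.pi * (1 - x * x), -1, 1)
print(vol, 4 * math.pi / 3)  # both ~4.18879
```

Summing areas of slices really does converge to the exact volume 4π/3.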
The infinite sum of all the natural numbers 1+2+3+... is a divergent series. But it can also be shown to be equivalent to -1/12. This result is actually used in quantum field theory.
It can't be shown to be equivalent to -1/12. The sum definitely just goes to infinity. However, if you use some specific nonstandard summation methods (such as analytic continuation of the Riemann zeta function, where ζ(-1) = -1/12), you can squeeze out -1/12.
What I think is interesting is how many choices of nonstandard definitions you can use to "prove" this result. I can recall 3 just right off the top of my head. However, as these are nonstandard definitions, one can't really say that the sum is -1/12 without specifying which logical system you are operating in, because the default system makes it simply untrue.
It's like saying that 2+2=0. Sure, you can define the + sign to be some nonstandard function, but unless I describe that function to you, I can't just simply tell you 2+2=0, because you'd just assume the standard definition of +, in which 2+2 definitely isn't 0.
Szemerédi's regularity lemma is really cool. Basically, if you desire a certain structure in your graph, you just have to make it really, really (really) big and then you're sure to find it. Or in other words, you can approximate any graph by a really regular one, up to any positive error percentage, as long as you make it really, really (really, really) big.
Let's define a sequence. We will start with 1 and 1.
To get the next number, square the last, add 1, and divide by the second to last.
a(n+1) = ( a(n)^2 +1 )/ a(n-1)
So the fourth number is (2*2 + 1)/1 = 5, while the next is (25 + 1)/2 = 13. The sequence is thus:
1, 1, 2, 5, 13, 34, ...
If you keep computing (the numbers get large), you'll see that we get an integer every time. But every step involves a division! Usually dividing things gives fractions.
This is called the Somos sequence, and it shows up in fairly deep algebra.
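It's easy to verify the integrality claim with exact rational arithmetic; a Python sketch:

```python
from fractions import Fraction

def somos_like(n_terms):
    # a(1) = a(2) = 1; a(n+1) = (a(n)^2 + 1) / a(n-1).
    # Fractions keep every division exact, so we can check that
    # each term really has denominator 1.
    seq = [Fraction(1), Fraction(1)]
    while len(seq) < n_terms:
        seq.append((seq[-1] ** 2 + 1) / seq[-2])
    return seq

terms = somos_like(10)
assert all(t.denominator == 1 for t in terms)  # every division comes out exact
print([int(t) for t in terms])  # [1, 1, 2, 5, 13, 34, 89, 233, 610, 1597]
```

Despite a division at every step, the denominators always cancel.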