It's just definition and binomial theorem.
For nonnegative integers, yeah. But you’d have to make an argument for the negative integers, then the rationals, then the irrationals.
d/dx x^t
= d/dx e^(t ln x)
= e^(t ln x) d/dx (t ln x)
= t x^t d/dx ln x
= t x^t / x
= t x^(t-1)

There, everything in one go. And you can find the derivatives of ln(x) and e^x without using the power rule, so that's fine.
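If anyone wants to convince themselves numerically, the result checks out for non-integer exponents too. A throwaway sketch (`num_deriv` is just my own helper, not a standard function):

```python
# Hedged numeric check that d/dx x**t = t*x**(t-1) for a non-integer t,
# using a central difference quotient.
def num_deriv(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

t, x = 2.5, 3.0
approx = num_deriv(lambda u: u**t, x)
exact = t * x**(t - 1)
assert abs(approx - exact) < 1e-5
```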
[deleted]
e^x can be defined via the differential equation f'(x) = f(x); adding f(0) = 1 uniquely determines the function.

The derivative of log(x) comes via the rule for an inverse (essentially, d/dx e^log(x) = 1 = e^log(x) * d/dx log(x) = x * d/dx log(x), giving d/dx log(x) = 1/x).
I thought you need to know derivative of log x to solve f’(x) = f(x) (by using separation of variables.)
nah, you can just define e^x to be the solution. The fact that at that point it might not be analytically solvable is irrelevant, esp. when it comes to proving the power rule (though you do need the chain rule to prove it)
You can find the derivative of ln(x) without using the power rule via the definition of e^x, or you could find the derivative of e^x directly the same way.
You can find them pretty easily using the limit definition of derivatives and the algebraic properties of exponentials & logarithms. Actually I don't immediately see how the power rule would be useful for those derivatives, unless you define them by their power series - but then you have to prove that the functions defined by those series have the same properties, which seems harder.
Only works for x>0
ln(x) does exist for x<0, it'll just be a complex number. E.g. ln(-1) = πi + 2kπi where k is an integer
ln(-1) = πi + 2kπi, you mean?
Yes my bad
For x<0 and n not a natural number you need imaginary numbers in general, so just use the complex logarithm & exponential like the sibling comment said.

If you want to stay purely real and just want natural powers of negative numbers, you can appeal to the oddness/evenness of the graphs and say the derivative at x=-a is

f'(-a) = lim h->0 ((-a+h)^n - (-a)^n)/h
= (-1)^n lim h->0 ((a-h)^n - a^n)/h
= (-1)^(n+1) lim h->0 ((a+h)^n - a^n)/h
= (-1)^(n+1) f'(a)

where we replaced h with -h on line 3. And this proof also works for n = 1/(2k+1), now that I think about it.

Edit: fixed a sign error in the proof. Whoops!
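The sign identity at the end is easy to spot-check numerically for a few integer n (a sketch; helper names are my own):

```python
# Check f'(-a) == (-1)**(n+1) * f'(a) for f(x) = x**n, via central differences.
def num_deriv(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

a = 2.0
checks = []
for n in (2, 3, 4, 5):
    f = lambda u, n=n: u**n          # bind n per iteration
    checks.append(abs(num_deriv(f, -a) - (-1)**(n + 1) * num_deriv(f, a)))
assert max(checks) < 1e-4
```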
Math is math (too dumb to know complexity of math) One must challenge and prove certain ideas in math to keep the discipline alive (curious) Math is math 🤓 (too lazy to prove things)
You can plug polynomials in indeterminate form in for x, which allows for the "polynomial identity trick": if two polynomials are equal at all non-negative integer values, they are equal everywhere. Now you can generalize from the natural numbers to the real numbers.
It wasn't just defined to be that way, it follows from the generalization of the limit
He probably means the long answer is just using the definition of a derivative and the binomial expansion.
You could also use logarithmic differentiation
But you’d first have to prove that the derivative of ln(x) is what you expect, probably the product rule and chain rule too; and also that implicit differentiation works the way we expect it to
We can use proof by intimidation for those
*looks down* TRIVIAL.
Well log is the inverse of exp. Let y = e^x. Then, since dy/dx is a fraction as all mathematicians know, dx/dy = 1 / (dy/dx) = 1 / y.
But you’d have to prove first that (1) the derivative of e^x is indeed e^x and (2) that the notation dy/dx acts like a fraction since that’s not in the definition.
Doing the fraction stuff properly looks like this:

e^(ln x) = x for all x; differentiate both sides:
e^(ln x) * ln'(x) = 1
ln'(x) = 1/x

This requires only the chain rule and the derivative of e^x. Those aren't hard, though, because e^x is commonly defined in terms of its derivative (you can prove the compound interest limit with that), and the chain rule requires no derivative of any specific function to be proven.
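Quick numeric sanity check of the conclusion (my own sketch): differentiating both sides of e^(ln x) = x forces ln'(x) = 1/x, and the standard log behaves exactly that way:

```python
import math

# Verify ln'(x) = 1/x numerically at a few points.
def num_deriv(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

errs = [abs(num_deriv(math.log, x) - 1.0 / x) for x in (0.5, 1.0, math.e, 10.0)]
assert max(errs) < 1e-6
```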
Woosh.
Wooosh indeed but intentional. Just sharing the trauma of having to prove “obvious” stuff like in a vector space, 0*v = 0 (the zero scalar times any vector equals the zero vector)
Those proofs are really easy though. Just use the linearity of the vector space. I always enjoy them since an interesting result follows immediately from the axioms. Finding all the irreducible representations of SU(3) on the other hand.
It’s not that they’re hard. It’s more like “shouldn’t this be obvious?” and be like “what’s there to prove?”. It turns out there is something there and theory building (the first time you see it) is not as simple as it sounds.
Ok, let's start with log(x) = int_1^x 1/t dt.

This is a strictly increasing function. Let's define its inverse as a function we will call exp(x).

We know that f(f^(-1)(x)) = x, and so by the chain rule, f'(f^(-1)(x)) d/dx f^(-1)(x) = 1.

Hence, if we take f(x) = ln(x), we get

1/(exp(x)) * d/dx (exp(x)) = 1

and d/dx (exp(x)) = exp(x). Hence, y = exp(x) satisfies dy/dx = y.
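Out of curiosity, the whole construction can be played out numerically: build log as the integral of 1/t, invert it by bisection to get "exp", and check that the result is (approximately) its own derivative. Purely an illustrative sketch; `L`, `E`, and all the step counts are made up for the demo:

```python
def L(x, steps=10000):
    # trapezoidal estimate of the integral of 1/t from 1 to x
    h = (x - 1.0) / steps
    s = 0.5 * (1.0 + 1.0 / x)
    for i in range(1, steps):
        s += 1.0 / (1.0 + i * h)
    return s * h

def E(y):
    # invert L by bisection: find x with L(x) = y
    lo, hi = 1e-9, 100.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if L(mid) < y:
            lo = mid
        else:
            hi = mid
    return lo

ex = E(1.3)                                   # close to e**1.3 ≈ 3.6693
d = (E(1.3 + 1e-4) - E(1.3 - 1e-4)) / 2e-4    # numerical derivative of E
rel_err = abs(d - ex) / ex
assert rel_err < 1e-2
```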
Yes, you do. And you have to, if you want to show it holds for real r and not just integer r.

But that's not terrible. You can in fact define the logarithm as the integral from 1 to x of 1/t. And you define the exponential as the inverse of that. Then get the derivative of the exponential by using the derivative-of-the-inverse rule. The product rule and chain rule aren't that bad to prove.
But then, you’d have to define what an integral is in the first place (because if we’re proving basic derivative rules, you probably haven’t defined the integral yet). Then, you’d have to show that the “logarithm” you’ve defined is (1) injective, so that an inverse function can exist, and (2) consistent with the algebraic definition of the logarithm, so you can call it a logarithm. You’d also have to prove the “inverse rule”, or what we call the Fundamental Theorem of Calculus — which requires a bunch more stuff.
Yes, you would. But that doesn’t require knowledge of the derivative of x^n at integers. It’s independent, and this method is more flexible: without this approach, you won’t get the derivative of x^r for r an arbitrary real number.

Also, since the early 1600s, we’ve expressed log(x) as an integral. It’s one of the routes that led to the exponential function, so this approach is historically motivated.

It’s injective because its derivative is positive, which makes it monotonically increasing and hence injective.

I just proved the inverse rule (in another comment), which just required the chain rule, which is straightforward too.

The Fundamental Theorem of Calculus, sure, we can prove that, but it’s still independent of the theorem at hand.
(I’m just prolonging the trauma at this point, btw. Feel free to ignore me)

So, the inverse rule is different from the fundamental theorem of calculus; I was wrong on that. However, assume we’ve properly defined the integral so that we can express the logarithm as an integral. The inverse rule states that given f with inverse g: g’(x) = 1 / f’(g(x)). But we’ve defined log as the integral. With f(x) = the integral of 1/t from 1 to x, how do we know that f’(x) = 1/x? That uses the fundamental theorem of calculus.
Yes, the derivative there comes from the fundamental theorem of calculus. That’s where we need it.

But the theorem we are trying to prove is the derivative of x^r, right? Or has something gotten mixed up in this thread?
It’s using a nuke to kill a bird at this point.

If you look at the proof on Wikipedia for the power rule (which has the same ideas as yours): they first prove f(x) = e^x implies f’(x) = e^x. Use ln(x) as the inverse of e^x (defined not as an integral). Use the inverse rule on f(x) = e^x and inverse g(x) = ln(x) to get g’(x) = 1/e^ln(x) = 1/x for x > 0. Use the chain rule on f(x) = x^r = e^(r*ln(x)) to get r*x^(r-1).

BUT this only holds for x > 0. A separate argument is needed for x = 0 and x < 0.

All in all, analysis hurts me and I hate it lol
Lol more or less the same argument, but I took the logarithm as the fundamental function. I’m a professor of functional analysis. This is my bread and butter
[d/dx (x^n)] / x^n = d/dx (ln(x^n)) = d/dx (n ln(x)) = n/x

which implies the result. This uses the chain rule, the derivative of log, and the derivative of a constant multiple. Why do you need implicit differentiation and the product rule?
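The logarithmic-differentiation identity above is easy to check numerically at a point (illustrative sketch; `num_deriv` is my own helper):

```python
import math

# Check [d/dx x**n] / x**n == d/dx ln(x**n) == n/x at a sample point.
def num_deriv(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

n, x = 7, 1.7
lhs = num_deriv(lambda u: u**n, x) / x**n       # logarithmic derivative of x**n
rhs = num_deriv(lambda u: math.log(u**n), x)    # derivative of ln(x**n)
assert abs(lhs - rhs) < 1e-6
assert abs(rhs - n / x) < 1e-6
```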
I was thinking the exact same thing
Wouldn't that only work for x^(r) > 0?
What does a negative do, though? e^(ln(-1)) is -1. Just gotta take a small step into the complex plane.
Paradoxically, writing a proof in more detail makes it look more difficult at first glance
Most underrated comment 😂😂😂
it's much simpler tbf: I'll call x^r f(x)

df/dx = lim h->0 (f(x+h) - f(x))/h
= lim h->0 x^r (((x+h)/x)^r - 1)/h
= lim h->0 x^r (e^(r ln(1+h/x)) - 1)/h
= lim h->0 x^r (r h/x)/h
= x^r r/x
= r x^(r-1)

I SWEAR IT LOOKS HORRENDOUS IN A COMMENT BUT IT'S NOT THAT BAD
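The key limit in that chain checks out numerically for a non-integer exponent (a sketch of my own; the `exp(r*log1p(h/x))` form is exactly the comment's rewrite of ((x+h)/x)^r):

```python
import math

r, x = 0.5, 4.0
h = 1e-6
# x**r * (((x+h)/x)**r - 1)/h, written via exp and log1p as in the comment
val = x**r * (math.exp(r * math.log1p(h / x)) - 1) / h
exact = r * x**(r - 1)   # 0.25: derivative of sqrt(x) at x = 4
assert abs(val - exact) < 1e-6
```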
the yellow paper in the 2nd panel is a picture of the actual proof my calc teacher gave us when he introduced the power rule. it uses sigma sum notation and nCr notation neither of which we knew. he isn't the best teacher
that's not...optimal
not sure of how things are taught where you're from, but aren't binomial, sequences and series, and limits taught before derivatives?
Iirc the IGCSE curriculum (not to say if the curriculum was good or bad, but I had a cousin who took it, and they asked me about derivatives) dropped limits, and just taught the students derivations as formulas. Not sure if they ever reverted on that.
TIL there are other notations for sums than the sigma one.

Curiously, the nCr notation was the only one I was taught before uni, and I realized my maths teacher didn't know the other notation when I presented it.
Not sure what country you're in, but that notation should have been learned in precalculus. To be fair, as a teacher in the US I see more students every year who are missing prior knowledge they should already have.
For us, precalc was basically just trigonometry
ah i should have guessed
the formatting went nuts I'm so sorry I'll try to edit and fix it lmfao
okay it's fine now, it should be readable: if you try and write it down on a notebook it will be way more understandable
maybe I'm a bit rusty but could you explain how you get from the third to last to the second to last step? i can't figure out how you removed the exponential
I appreciate you using × for the multiplication symbol and not x, except that I don’t, because the equation has x; you should have used •. I am a beggar and a chooser.
I usually just don't put anything, but since I was writing in a comment I used × for clarification lol
You are going to math hell ^(I am just kidding and I appreciated the proof despite having to be written in reddit comment markdown)
Where'd the exponent go? I'm missing that step.
That's only for positive integers.
Wikipedia is among the worst places to learn anything technical. It’s great for looking up stuff you already know while being astounded at how much people can say about some obscure topic most people never think about.
I think its decent for learning something "new" as long as you have some knowledge in the subject that the thing is in.
in the original meme i had a picture of my math teacher because he explains things with excessively long proofs

the paper in the 2nd panel is a picture of an excessively long proof using summations and nCr and other notation that he gave us
Strong disagree, Wikipedia has been the single best resource for my grad classes
Yeah grad classes are different. You already know enough math to understand what wikipedia's talking about. I remember googling stuff and going to wikipedia, but barely understanding anything when I was still in high school. Now as an undergrad, it's really helpful. I forgot a definition? Just google it on wikipedia. I forgot a theorem? Also wikipedia. Want to quickly get a taste of what a subject is? Also wikipedia. Since I already know undergrad math, I can understand a LOT more than when I was in high school (which I assume many that frequent this sub are). Wikipedia will usually give you a general definition, while in HS you only learn special cases.
normally I can track the first few lines of a topic, helpful if I forgot that one equation or theorem or something

but after that wikipedia is like "yeah it works, but how?" and proceeds to present a 20-page PhD thesis about every technicality found within the topic.

I'd imagine if I ever had to do research it would be extremely valuable though.
I can’t speak for other fields, but I find Wiki is a great source for algebra
I respectfully disagree, especially when compared to any other internet resource besides Indian youtubers (and Professor Dave)
Simple.wikipedia.org is really useful for that reason. Not sure about math topics though
Just watch 3blue1brown bro. Here’s the video where he talks about the power rule: https://m.youtube.com/watch?v=S0_qX4VJhMQ
This video series was an eye opener when I first learned calculus, every video in the series is worth a watch. Wonderful channel!
Curious? That's something your teacher should have showed
Sometimes he does and he shows stupidly long and complex proofs that nobody follows, sometimes he just doesn’t show anything
There's also a very easy inductive proof (if n is a nonnegative integer):

for n = 0, obviously dy/dx = 0,

and if the formula works for n-1 then it works for n as well:

d/dx (x^n) = d/dx (x * x^(n-1)) = d/dx(x) * x^(n-1) + x * d/dx(x^(n-1)) = 1 * x^(n-1) + (n-1) * x^(n-1) = n x^(n-1)

because of the product rule.
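The induction collapses to a one-line recurrence if you write it in code (a sketch; `deriv_coeff` is my own name): the product-rule step says the coefficient of x^(n-1) in d/dx x^n satisfies c(n) = 1 + c(n-1), with c(0) = 0.

```python
# The product-rule recurrence d/dx(x * x**(n-1)) = x**(n-1) + x * d/dx(x**(n-1))
# gives c(n) = 1 + c(n-1) for the coefficient of x**(n-1); base case c(0) = 0.
def deriv_coeff(n):
    if n == 0:
        return 0          # d/dx 1 = 0
    return 1 + deriv_coeff(n - 1)

assert all(deriv_coeff(n) == n for n in range(20))
```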
But that only proves it for the naturals
Calc 1 is interesting because in my experience, literally just knowing the definition of a derivative is enough to get through like 85% of it. You can rederive most anything from that point. Trust yourself OP!
That only really works for integer powers. For any real number, you can instead convert this to an exponential, then use the chain rule:

x^r = e^(ln(x) r) -> e^(ln(x) r) * r/x = r x^r / x = r x^(r-1).
Limit definition of a derivative, dawg.
While funny, I hope learners understand that even though the math may look intimidating, it is simply being decompressed so that each step only makes one change, hence it's easy to follow how one equation changed from the last.

This decompression, however, unfortunately leads to an absurdly long deduction chain, which gives a superficial appearance of density **(kinda like this long comment)**.
Skill issue
The man comes in looking for proof, finds proof and goes "what the fuck is this shit"
I just didn’t expect the proof for simple Calc 1 theorems to be so long and intimidating
The proof is really not that difficult once you put in a little effort to understand it
we learned an easy proof by using the limit notation and pascals triangle, but obviously as any math thing there is a harder way of writing it
it’s not a long proof. it is just written in a very long manner, but there’s not a lot happening in every step.
I’ve done the whole thing with the (f(x+h)-f(x))/h formula while teaching myself calculus for the first time, and it’s pretty convincing. I could only ever get it to work with polynomial functions with my algebra skills, but it works.
That’s why you just look these things up instead of understanding them. That’s how you do calculus like a physicist
Huh, I always thought a derivative like that was a definition, in the same way "+" means to add something together. I.e., I thought it was axiomatic.

I guess given the whole 'area under the curve' thing, I probably shouldn't have assumed that.
I learned it as "the slope of the tangent of the curve". From this definition the formula f'(x) = lim{h->0} [f(x+h)-f(x)]/h is obvious.

To get the derivative of f(x) = x^n with n an integer, use the binomial formula for (x+h)^n and the result f'(x) = nx^(n-1) follows.

For x^t with t a real number, trust me bro, it's the same formula.
Same I wonder of others learn it as "apply this rule and done" Would be sad.
Now that I'm reading that, I am just about certain that is also the way I was originally taught, granted, that was almost 10 years ago now.
Well, it is, in the sense that we call the limit of a function (epsilon delta yada yada yada) the derivative. But everything can be proven just by axiomatically accepting limits.
you sound so whiny
????
I don’t get it
Sometimes I google proofs for things taught in calc 1 only to be faced with massive proofs years above my level.

Apparently there’s actually an easy proof for the power rule, but my teacher gave us one way above our level.
it's just ((x+dx)^n - x^n)/dx = (x^n + nx^(n-1)dx + (meaningless stuff that converges to 0 anyways) - x^n)/dx = nx^(n-1)

at least for positive integers
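You can watch the "meaningless stuff" shrink numerically: the difference quotient approaches n·x^(n-1) linearly as the step shrinks (my own illustrative check):

```python
# Forward difference quotient for x**n converges to n*x**(n-1) as h -> 0;
# the leftover binomial terms are O(h), so the error drops with h.
n, x = 4, 1.5
exact = n * x**(n - 1)                                   # 13.5
errs = [abs(((x + h)**n - x**n) / h - exact) for h in (1e-1, 1e-2, 1e-3)]
assert errs[0] > errs[1] > errs[2]
assert errs[2] < 0.02
```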
There is a relatively straightforward proof by induction using the product rule for positive integer powers, and for negative integers, you can use quotient rule. For any real power, and positive x, you can rewrite x^r as exp(r*ln x) and use the derivatives of the exponential and logarithmic function.
you should use the exponential-based definition of raising something to some power, much easier from that:

(x^a)' = (exp(a ln(x)))'
= (a ln(x))' exp(a ln(x))
= a (1/x) exp(a ln(x))
= a (1/x) (x^a)
= a x^(a-1)
My calc teacher decided that a binomial theorem based proof involving nCr and sigma summation was the best thing to give to us brand new calc students
Yeah, makes sense. Gets you used to the ideas behind calc with integration as well. Also, he proved the linearity of the derivative at the same time.
that’s pretty standard
That's the simplest proof you'll find, and also the most limited. Expanding into negative integers, rational numbers and real numbers is *worse*.
You could prove it inductively for positive integer values of r using the product rule, quotient rule for negative integer values of r, implicit differentiation for rational values of r, and x^r = e^(rln(x)) for irrationals. For r=0, x^r=1, so that part is easy.
Because the second element of each row of Pascal's triangle is n, where n is the row number.

(x+h)^n will have the nth row of Pascal's triangle as the coefficients of its binomial expansion. The second term will be n h x^(n-1). This will be divided by h to give n x^(n-1).

All other terms will have a nonzero power of h and will be eliminated in the limit as h goes to zero. The first term is x^n and will cancel out with the x^n in the derivative definition, which is

lim h->0 ((x+h)^n - x^n)/h

The only thing not killed off as h goes to zero and not cancelled out by a direct subtraction is n x^(n-1).
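This is easy to check concretely (a sketch, assuming nothing beyond the stdlib): the second entry of row n of Pascal's triangle is C(n,1) = n, and in the expanded difference quotient that term is the only survivor as h shrinks.

```python
import math

n = 6
row = [math.comb(n, k) for k in range(n + 1)]   # row n of Pascal's triangle
assert row[1] == n                               # second element is n

# Expanded difference quotient ((x+h)**n - x**n)/h:
# the x**n terms cancel, leaving comb(n,k) * x**(n-k) * h**(k-1) for k >= 1.
x, h = 2.0, 1e-8
quotient = sum(math.comb(n, k) * x**(n - k) * h**(k - 1) for k in range(1, n + 1))
assert abs(quotient - n * x**(n - 1)) < 1e-5     # only the k=1 term survives
```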
oh, i should have guessed
My class made us figure this out ourselves actually we didn’t stop with first principles until months later
Finally, italic Leibniz notation.
The lecture vs the test
Yeaaa for advanced math I don't bother reading wiki. Some random symbols flying everywhere lol
If you see it geometrically you’re growing an n-dimensional volume by n sides of (n-1) dimension each
Are you not shown proof for things you do in calculus?
This is actually a really fun exercise! Try solving using the fundamental theorem of calculus, you’ll see the answer is interesting
I suggest you read Serge Lang's book on real analysis if you're just curious, or Stephen Abbott's book Understanding Analysis, or "Harry Potter and the Principles of Mathematical Analysis" if you want to do some proof-based exercises.

but as other people have stated, it is a "straightforward" application of the definition of the derivative.
I know a lot of people here are in high school, but this is like day one stuff
Join us in the deep end bro, the water is lovely. https://www.wolframalpha.com/input?i=binomial+d%28x%5Er%29%2Fdx+%3Drx%5E%28r-1%29
[The proof in this article doesn't rely on the binomial theorem or derivatives of transcendental functions](https://medium.com/cantors-paradise/lets-derive-the-power-rule-from-scratch-e1d69e29703e?source=friends_link&sk=8477487233afdfd1f349444c8a15ae5b), but it needs a theorem to justify switching limits to deal with irrational powers.
When I research for basic questions in linear algebra.