One minor nit: A function can be differentiable at a and discontinuous at a even with the standard definition of the derivative. A trivial example would be the function f(x) = (x² - 1)/(x - 1), which is undefined at x = 1, but f'(1) = 1 (in fact, derivatives have exactly this sort of discontinuity in them, which is why they’re defined via limits). In complex analysis, this sort of “hole” in a function is called a removable singularity¹, which is one of three types of singularities that show up in complex functions.
⸻
1. Yes, this is mathematically the reason why black holes are referred to as singularities.
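For anyone who wants to poke at the “hole” numerically, here’s a minimal sketch (plain Python; the name f just mirrors the function above):

```python
# Probe f(x) = (x**2 - 1)/(x - 1) near its hole at x = 1.
def f(x):
    return (x**2 - 1) / (x - 1)

for h in [0.1, 0.01, 0.001]:
    print(f(1 + h), f(1 - h))   # both sides approach 2

# f(1) itself raises ZeroDivisionError: the limit exists but the
# value doesn't, which is what makes the singularity "removable".
```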
> this sort of “hole” in the function is called a removable singularity
It's called "removable" because it can be removed by a continuous extension - the original function itself is still formally discontinuous (of course, one would often "morally" treat these as the same function, but strictly speaking they're not). An important theorem in complex analysis is that any continuous extension at a single point is automatically a holomorphic (= complex differentiable) extension too.
I don't think it makes sense to allow derivatives of a function f to have a larger domain than the domain of f.
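Not a proof, but a quick numeric illustration of that last point (Python complex arithmetic; the extension z + 1 comes from factoring z² - 1):

```python
# Off z = 1, (z**2 - 1)/(z - 1) agrees with the polynomial z + 1,
# which is the (automatically holomorphic) extension through the hole.
for z in [1 + 0.01j, 1.01 + 0j, 0.99 - 0.02j]:
    print((z**2 - 1) / (z - 1), z + 1)   # pairs match up to float error
```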
>which is why they’re defined via limits
They're defined by studying f(x + h) - f(x) with a limit h -> 0. But your example is taking two limits, h -> 0 and x -> 1, simultaneously. That is not the same thing.
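A small sketch of the difference (plain Python; substituting 2 for the missing f(1) is exactly the hidden second limit):

```python
def f(x):
    return (x**2 - 1) / (x - 1)

# The standard quotient (f(1 + h) - f(1)) / h can't even be formed,
# because f(1) raises ZeroDivisionError.  Quietly substituting the
# limiting value 2 for f(1) makes the quotient converge to 1, but
# that is the derivative of the continuous extension, not of f itself.
for h in [0.1, 0.01, 0.001]:
    print((f(1 + h) - 2) / h)   # approaches 1
```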
You are wrong. In order to make sense of what you are saying, you first must REDEFINE f to be f(x) = (x² - 1)/(x - 1) for x ≠ 1 and define f(1) = 2. Of course, then f will be continuous at x = 1 also.
A function is continuous at x = a if it is differentiable at x = a.
You do understand the concept, but your precision in the definitions is lacking.
I'm not understanding what you're saying. The standard definition of the derivative of f at c is
f'(c) = lim_{h → 0} (f(c + h) - f(c))/h
The definition would not make sense if f wasn't defined at c (note the "f(c)" in the numerator). For instance, it can't be applied to your f(x) = (x² - 1)/(x - 1) at x = 1, because f(1) is not defined.
And it's a standard result (even stated in Calc 1 classes) that if a function is differentiable at a point, then it's continuous there. For example:
5.2 Theorem. Let f be defined on [a, b]. If f is differentiable at a point x ∈ [a, b], then f is continuous at x.
(Walter Rudin, "Principles of Mathematical Analysis", 3rd edition, p. 104)
Or:
Theorem 2.1 If f is differentiable at x = a, then f is continuous at x = a.
(Robert Smith and Roland Minton, "Calculus: Early Transcendentals", 4th edition, p. 140)
It's true that your f(x) = (x² - 1)/(x - 1) has a removable discontinuity at x = 1, since if we define g(x) = f(x) for x ≠ 1 and g(1) = 2, then g is continuous. Was this what you meant?
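If it helps, here's that g as a minimal sketch (plain Python; the names follow the definition above):

```python
def g(x):
    return (x**2 - 1) / (x - 1) if x != 1 else 2

# g is continuous at 1: nearby values approach g(1) = 2 ...
print(g(1.001), g(0.999), g(1))

# ... so the standard definition of the derivative now applies at c = 1:
for h in [0.1, 0.01, 0.001]:
    print((g(1 + h) - g(1)) / h)   # approaches g'(1) = 1
```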