
Functional Equation Involving Integrals


I'm working on a very interesting problem which I will post in the coming days. In the meantime, here's a nice functional equation that I solved yesterday. This problem, in a slightly changed form, was shortlisted for the 2015 Romanian NMO (National Maths Olympiad). The problem goes as follows:

Find all the functions \(f:[0,1]\to\mathbb{R}\) such that \(\int_0^x f(t)\,dt = f(x)^n+f(x)\) for all \(x\in[0,1]\), for a fixed odd \(n\).


Solution:

Let us consider the function \(z(x)=x^n+x\).

\(\frac{dz}{dx}=nx^{n-1}+1>0\) (since \(n-1\) is even, \(x^{n-1}\ge 0\)), so \(z\) is strictly increasing. Since \(n\) is odd, \(z(x)\to\pm\infty\) as \(x\to\pm\infty\), so \(z\) is also surjective; thus \(z\) is bijective, and it has an inverse. Returning to the equation at hand, $$f(x)=z^{-1}\!\left(\int_0^x f(t)\,dt\right),$$and, since the integral of an integrable function is continuous and \(z^{-1}\) is continuous, \(f\) is continuous; by the fundamental theorem of calculus the integral is then differentiable, and, since \(z^{-1}\) is differentiable, so is \(f\).
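For completeness, the differentiability of \(z^{-1}\) used above follows from the inverse function rule, since \(z'\) never vanishes:
$$\left(z^{-1}\right)'(y)=\frac{1}{z'\!\left(z^{-1}(y)\right)}=\frac{1}{n\,z^{-1}(y)^{n-1}+1}>0.$$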

Now that we know that \(f\) is differentiable, we can differentiate the equation with respect to \(x\), obtaining that 
$$f(x)=nf(x)^{n-1}f'(x)+f'(x) \qquad \text{(I)}$$
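For later use, note that (I) factors as
$$f(x)=f'(x)\left(nf(x)^{n-1}+1\right),$$
and, since \(n-1\) is even, \(nf(x)^{n-1}+1\ge 1>0\), so \(f'(x)\) has the same sign as \(f(x)\) at every point.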

Let us look now at the set \(A=\{x\in[0,1] \mid f(t)=0 \text{ for all } t\in[0,x]\}\). By putting \(x=0\) in the equation, we get \(f(0)^n+f(0)=z(f(0))=0\), and, since \(z\) is injective with \(z(0)=0\), we deduce that \(f(0)=0\), so \(A\) is not empty. \(A\) is also closed (by the continuity of \(f\)) and bounded, so it has a maximum, \(M\). By the inertia theorem (local sign preservation for continuous functions), there is an interval \([M,x_0]\) on which \(f\) does not change sign, with \(x_0\) being the largest number with this property. If \(x_0=1\), then we have no problem. If \(x_0<1\), then it means that \(f(x_0)=0\). However, by looking at equation (I), we can deduce that, if \(f(x)>0\), then \(f'(x)>0\); if \(f(x)=0\), then \(f'(x)=0\); if \(f(x)<0\), then \(f'(x)<0\). Say \(f\ge 0\) on \([M,x_0]\) (the case \(f\le 0\) is symmetric); then \(f'\ge 0\) there, so \(f\) is nondecreasing, and \(f(M)=f(x_0)=0\) forces \(f\equiv 0\) on \([M,x_0]\). But then \(x_0\in A\), so \(x_0\le M\), which contradicts the way we chose \(x_0\). This implies that \(f\) keeps a constant, nonvanishing sign on \((M,1]\), and, thus, is either strictly increasing or strictly decreasing after \(M\).
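Explicitly, the squeeze in the nonnegative case reads
$$0=f(M)\le f(x)\le f(x_0)=0 \quad \text{for all } x\in[M,x_0],$$
which is what forces \(f\equiv 0\) on the whole interval.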

From now on, we shall focus on proving that \(M=1\). For simplicity of calculation, we shall write \(f(x)=y\). If \(y>0\) on \((M,1]\), then \(y'>0\) there, and we can rearrange (I), dividing by \(y\), into the form
$$1=ny^{n-2}y'+ \frac{y'}{y}$$ 
and, by integrating, we get that, for some constant \(k\),
$$\frac{n}{n-1}y^{n-1}+\ln(y)-x+k=0$$
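As a sanity check on the integration step (this is also where we implicitly use \(n\ge 3\): for \(n=1\) the term \(\frac{n}{n-1}y^{n-1}\) is undefined, and that case would need separate, easier, treatment): for \(y>0\),
$$\int ny^{n-2}y'\,dx=\frac{n}{n-1}\,y^{n-1}+C_1, \qquad \int \frac{y'}{y}\,dx=\ln(y)+C_2.$$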
However, since \(f\) is continuous and \(f(M)=0\), $$\lim_{x\to M^+}\left(\frac{n}{n-1}f(x)^{n-1}+\ln(f(x))-x+k\right)=-\infty,$$whereas the left-hand side must equal \(0\) for every \(x\in(M,1]\), which is a contradiction. Thus, \(f\) isn't bigger than 0, and, similarly, \(f\) isn't smaller than 0, which means that \(f(x)=0\) for all \(x\in[0,1]\).
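And, indeed, the zero function satisfies the original equation, as the one-line check
$$\int_0^x 0\,dt=0=0^n+0$$
confirms, so \(f\equiv 0\) is the unique solution.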

The problem was quite intuitive if you think about it. It seemed pretty obvious that \(f\) couldn't be anything other than 0, but that's how most hard analysis problems are: you are asked to prove something that is intuitively correct, but incredibly hard to formalize. This exercise wasn't particularly hard, but I found it pretty instructive and rather fun. Tune in next time for an article about an isomorphism on groups of integer matrices (if I manage to learn enough LaTeX to make the hats (^) above classes modulo p).

Cris.
