2010-08-30 11:24:04 8 Comments

There are some math quizzes like:

find a function $\phi:\mathbb{R}\rightarrow\mathbb{R}$ such that $\phi(\phi(x)) = f(x) \equiv x^2 + 1$. If such a $\phi$ exists (it does in this example), $\phi$ can be viewed as a "square root" of $f$ in the sense of function composition, because $\phi\circ\phi = f$. Is there a general theory of the mathematical properties of this kind of square root? (For instance, for which $f$ does a real-analytic $\phi$ exist?)


## 8 comments

## @Gottfried Helms 2016-06-18 07:44:00

[This is a second comment on Sheldon's answer; see [comment-2].] There is a (relatively) simple quick-and-dirty approximation for finding the power series of the $h(x)$-function where $h(h(x))=f(x)$. It exploits the method of Carleman matrices (in this case the Carleman matrix for $f(x)$) and uses diagonalization to compute the square root $F$ of that Carleman matrix, which contains the approximation to $h(x)$'s (truncated) power series by a polynomial. Here I show the approximation $h_n(x)$, meaning a polynomial of order $n$ intended to approximate $h(x) = \lim_{n \to \infty} h_n(x)$, computed from a Carleman matrix of size $n \times n$. The computation in Pari/GP is straightforward; we only need a user-defined procedure to construct the Carleman matrix of size $n$.

This simple procedure gives a nice approximation of a power series for $h(x)$, and surprisingly it is near to Sheldon's solution and seems to approximate it with increasing size $n$. Here is a comparison of Sheldon's coefficients (left column, $h(x)$) and the differences of the coefficients of $h_n(x)$ when computed for various $n$.

Remark: for me the question arises whether it is indeed the case that this procedure approximates the given Abel-/Böttcher-method solution, and whether/how this can be proven (I made a similar observation with other functions $f(x)$). I think this would give some more insight into the workings of the Abel-/Böttcher method and would be a big step toward making its application more intuitive.

[Update] On Sheldon's request to check which of his solutions (Kneser-type or Böttcher/Abel-type) this approximates best: it is difficult to predict whether the diagonalization approach converges to the Abel/Böttcher or to the Kneser solution. I calculated the truncated power series to 256 terms (via the 256 x 256 Carleman matrix), giving these results (the coefficient at $x^0$):

Surprisingly, the Kneser-like solution shown in Sheldon's answer has nonzero entries at the coefficients of odd index, so maybe that solution could have an even better numerical approximation itself. From my exercises with the square root of the $\exp()$-function I'd tend to the prognosis that the diagonalization comes out at the Kneser solution (if Kneser and Böttcher are really different), but who knows... Unfortunately, the diagonalization of a 256 x 256 Carleman matrix needs high float precision (400 decimal digits seemed not to be sufficient, so I used 800), and the computation took one and a half hours... So this procedure is surely only of theoretical interest, namely to decide whether the Kneser or the Böttcher/Abel solution is asymptotically equivalent to the diagonalization of an infinite Carleman matrix.
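Gottfried's Pari/GP routines are not reproduced in the text above. As a rough illustration of the approach, here is a small Python/numpy sketch (an assumed small size n=16 in double precision, versus the 800-digit computations described, so only the leading digits are meaningful). Since $f(x)=x^2+1$ is even, the truncated Carleman matrix has nonzero entries only in even columns, and a square root with the same sparsity can be computed blockwise, which sidesteps diagonalizing a singular matrix:

```python
import numpy as np

def carleman(f_coeffs, n):
    """n x n Carleman matrix: row j holds the first n Taylor coefficients of f(x)^j."""
    M = np.zeros((n, n))
    row = np.zeros(n)
    row[0] = 1.0                       # f(x)^0 = 1
    f = np.zeros(n)
    f[:len(f_coeffs)] = f_coeffs
    for j in range(n):
        M[j] = row
        row = np.convolve(row, f)[:n]  # coefficients of the next power of f, truncated
    return M

n = 16
M = carleman([1.0, 0.0, 1.0], n)       # f(x) = 1 + x^2: only even columns are nonzero

# Seek B with B*B = M and the same even-column sparsity.  Writing E, Od for the
# even-row and odd-row halves of the even columns of M, this reduces to
# S*S = E (matrix square root) and R*S = Od.
E = M[0::2, 0::2]
Od = M[1::2, 0::2]

w, V = np.linalg.eig(E)                # E is small and nonsingular
S = (V @ np.diag(np.sqrt(w.astype(complex))) @ np.linalg.inv(V)).real
R = Od @ np.linalg.inv(S)

h_even = R[0]                          # row j=1 of B: h(x) ~ sum_m h_even[m] * x^(2m)
print(h_even[:4])                      # ~ 0.642, 1.007, -0.514, 0.733
```

At this small size and precision the coefficients agree with the high-precision solutions quoted in the comments only to a few digits, consistent with Gottfried's observation that the constant term still moves in the fifth digit when the matrix size grows.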

## @Sheldon L 2016-06-19 02:44:38

Hey Gottfried; a long time ago I generated a solution (actually two of them) for this problem. One solution, which I called Kneser, has a0=0.64209475250660937; the other, which I called Boettcher, has a0=0.64209450439082838. Which one is your Carleman matrix solution approaching? Also, I wrote a Pari/GP program called "fatou.gp", which is posted on the tetration forum. fatou.gp will also solve the Kneser-type Abel function for $f(x)=x^2+x+k$, using "x2mode". For the problem at hand, we solve $f2(y)=y^2+y+\frac{3}{4}$ where $y=x+0.5$ and $f2(y)=f(x)+0.5$. There is even a half(z) function included.

## @Gottfried Helms 2016-06-19 11:58:23

@Sheldon - regarding the question of which solution the diagonalization is approximating: please see my new update.

## @Sheldon L 2016-06-19 12:28:35

The nonzero entries at "odd" coefficients in the Kneser solution are just precision artifacts. Notice how small they are. They do give an indication of the precision of the result, though.

## @Sheldon L 2016-06-19 12:49:04

I understand the simultaneous-equations solution Andrew generated for the slog, but in general I am not so good with matrices. Could you use the Carleman matrix approach to generate the $\alpha$ Abel function for $x^2+1$? If so, it might be the same as Andrew's slog, which seems to converge slowly to Kneser's solution. It helps to use Jay's speedup, since near the two fixed points, $0.5 \pm i\sqrt{0.75}$, the singularity behaves somewhat like the Schroeder function solution from the complex fixed point.

## @Sheldon L 2013-08-11 14:52:48

A solution can be generated based on the Abel function $\alpha(z)$ of $f(z)=z^2+1$. If we have the Abel function, then the half iterate of $z$ can be generated as $h(z)=\alpha^{-1}(\alpha(z)+0.5)$. Though it is not the only solution (nor, in my opinion, the best), the most accessible Abel function solution is based on a Boettcher function for the fixed point at infinity; I wrote a program last year that does just that, from which the results I posted earlier were quickly generated. It is easiest to work with the inverse Boettcher function, from which the inverse Abel function can easily be generated. I'm using the symbol $\beta$ for the Boettcher function. The problem is that $f(x)$ actually has a super-attracting fixed point at infinity, not a fixed point at zero. So we work with the reciprocal of the $\beta^{-1}$ function. We define the formal $\beta^{-1}$ function via the following relationship.

$\beta^{-1}(z^2)=\frac{1}{f( \, 1 \, / \, {\beta^{-1}(z) \, })}$

First generate the formal power series for the reciprocal function, $1/f(1/z)$, which allows one to generate the formal $\beta^{-1}$ series.

$fi(z)=\frac{1}{f(\frac{1}{z})} = \frac{z^2}{1+z^2} = z^2 - z^4 + z^6 - z^8 + z^{10} - z^{12} \cdots$

${\beta^{-1}(z^2)}=fi({\beta^{-1}(z)})$
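This recursion determines the coefficients of $\beta^{-1}$ order by order. The following is a small sketch in exact rational arithmetic (plain Python; the truncation order `N` is an arbitrary choice): writing $\beta^{-1}(z)=z+b_2z^2+b_3z^3+\cdots$, the unknown $b_{m-1}$ enters the $z^m$ coefficient of $fi(\beta^{-1}(z))$ linearly with factor 2, so each coefficient-matching step solves for exactly one new coefficient.

```python
from fractions import Fraction as Fr

N = 12  # keep coefficients of z^0 .. z^(N-1)

def mul(a, b):
    """Product of two truncated power series (coefficient lists, ascending powers)."""
    c = [Fr(0)] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i):
                c[i + j] += ai * b[j]
    return c

def fi_of(B):
    """fi(w) = w^2 - w^4 + w^6 - ... applied to the series B, i.e. fi(z) = 1/f(1/z)."""
    S = mul(B, B)              # B^2
    out = [Fr(0)] * N
    P, sign = S[:], 1
    while any(P):              # S^k vanishes from the truncation once 2k >= N
        for i in range(N):
            out[i] += sign * P[i]
        P, sign = mul(P, S), -sign
    return out

b = [Fr(0)] * N
b[1] = Fr(1)                   # beta^{-1}(z) = z + ...
for m in range(3, N):
    # z^m coefficient of beta^{-1}(z^2) is b[m/2] for even m, else 0
    target = b[m // 2] if m % 2 == 0 else Fr(0)
    # b[m-1] is still 0 inside fi_of, and enters the z^m coefficient with factor 2
    b[m - 1] = (target - fi_of(b)[m]) / 2

print(b[:8])   # beta^{-1}(z) = z + z^3/2 + 5 z^5/8 + ...
```

The even-indexed coefficients come out zero, i.e. $\beta^{-1}$ is an odd series, which is consistent with $fi$ being a series in $z^2$.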

Now, all you need is the formal power series for $\beta^{-1}(z)$, along with the equation for the inverse Abel function in terms of the Boettcher function, and the equation to generate the half iterate in terms of the Abel function, $h(z)=\alpha^{-1}(\alpha(z)+0.5)$.

$\alpha^{-1}(z)=\frac{1}{\beta^{-1}(\exp(-2^{z}))}$

$\beta^{-1}(z)=$

So this yields $\exp(2^z)$, the superfunction of $x^2$, as an approximate solution for the superfunction $\alpha^{-1}(z)$. This approximation is corrected by the Boettcher function to become exactly $\frac{1}{\beta^{-1}(\exp(-2^z))}$. Notice that as $z$ increases, $\exp(-2^z)$ rapidly goes to zero as long as $|\Im(z)|<\frac{\pi}{2\log(2)}$, and the approximation for the superfunction becomes more and more accurate. The Taylor series is centered so that $\alpha^{-1}(0)=2$. $\alpha^{-1}(z)=$

The Abel function and its inverse, the superfunction $\alpha^{-1}(z)$, combine to yield a valid solution for the half iterate, using numerical methods to get a Taylor series for $h(z)=\alpha^{-1}(\alpha(z)+0.5)$. I prefer the Cauchy integral to generate each coefficient of the Taylor series for the half iterate. So below this paragraph is the half iterate, generated by putting iterations of $x^2+1$ into correspondence with iterations of $x^2$ via the Boettcher super-attracting fixed point at infinity/zero. My main reason for preferring the Kneser-type solution is that the superfunction generated from the Kneser-type solution has no singularities in the upper half of the complex plane, whereas the Boettcher function solution is not nearly so well behaved, with an infinite number of singularities as $|\Im(z)|$ approaches $\frac{\pi}{2\log(2)}$. But the Kneser solution requires a Riemann mapping, so it is not as accessible as this Boettcher function solution. At the real axis, both functions are very close in value to each other. I haven't studied the half iterates of either in much detail, although the nearest singularity defines the radius of convergence, $\sqrt{a_0-1}\approx 0.598252i$, as noted in my comments. Here is the half iterate, $h(z)$, for $f(z)=z^2+1$. Notice that the radius of convergence is a little bit too small, so that $h(h(z))$ doesn't converge to $z^2+1$.

**Update for Gottfried, June 18 2016.** A long time ago I generated a solution (actually two of them) for this problem. One solution, which I called Kneser, has a0=0.64209475250660937; the other, which I called Boettcher, has a0=0.64209450439082838. Which one is your Carleman matrix solution approaching? Also, I wrote a Pari/GP program called "fatou.gp", which is posted on the tetration forum. fatou.gp will also solve the Kneser-type Abel function for $f(x)=x^2+x+k$, using "x2mode". For the problem at hand, we solve $f2(y)=y^2+y+\frac{3}{4}$ where $y=x+0.5$ and $f2(y)=f(x)+0.5$. There is even a half(z) function included in fatou.gp! This is how to generate the Kneser-style half iterate from the Kneser Abel function, using the fatou.gp program from http://math.eretrandre.org/tetrationforum/showthread.php?tid=1017

## @Gottfried Helms 2016-06-18 07:56:05

I'm adding a new comment to this answer, but because of the number of required bytes I'll make it an "answer", labeled "[comment-2]".

## @Sheldon L 2016-06-19 03:13:53

@GottfriedHelms I added the code sequence to generate the Kneser Abel solution for $f(x)=x^2+x+0.75$, and use that to generate the Kneser half iterate for $x^2+1$.

## @Gottfried Helms 2013-08-10 21:22:08

[This is a comment on Sheldon's answer; see [comment-1].] This is an additional example for my last comment on @Sheldon's answer, made as an "answer" because of the number of bytes required. The range of convergence of the (first) Kneser-type "ht(x)"-function in Sheldon's comment can be extended by Euler summation; this can be implemented by simply introducing suitable cofactors for the power series. Here is the Pari code:

and here is the comparison based on the 11 terms only:

The best order of the Euler summation depends on the value of the $x$-argument; for instance, to sum the alternating series $1-2+4-8+\cdots$ we need order 2, and to sum $1-3+9-27+\cdots$ we need order 3. The procedure for the computation of the cofactor vector is relatively simple; I can put it here if wanted.
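Gottfried's cofactor code is not shown above, but the idea can be illustrated with one common formulation of Euler summation: a binomial average of the partial sums. For the alternating series $1-2+4-8+\cdots$ mentioned here, whose Euler/Abel value is $1/3$, a full-depth binomial average of the first partial sums recovers that value (a plain-Python sketch; the depth m=20 is an arbitrary choice):

```python
import math

m = 20
# partial sums s_k of the (divergent) alternating series 1 - 2 + 4 - 8 + ...
s = []
acc = 0
for k in range(m + 1):
    acc += (-2) ** k
    s.append(acc)

# Euler summation as a binomial average of the partial sums
T = sum(math.comb(m, k) * s[k] for k in range(m + 1)) / 2 ** m
print(T)   # close to 1/3
```

Each averaging pass halves the effective growth ratio of the error, which is why a faster-growing series like $1-3+9-27+\cdots$ needs a higher order of the summation, as noted above.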

## @Sheldon L 2013-08-11 12:12:54

Time is limited this morning. The solution I gave has a radius of convergence limited by a singularity at $z=\sqrt{a_0-1} \approx 0.598252i$. This is due to a simple square-root branch: at that point $z^2+1=a_0$ and $h(z)=0$, and since $h(z)=\sqrt{h(z^2+1)-1}$, the vanishing square root creates a branch singularity in the half iterate. So the radius of convergence really is limited to $\approx 0.598252$, and there's nothing you can do to improve the convergence, since the function becomes multiple-valued. But the $\alpha^{-1}(\alpha(z)+0.5)$ solution is well defined.

## @Sheldon L 2013-08-11 12:18:39

I'll post the Bottcher function formal power series solution for the fixed point at infinity later.

## @Gottfried Helms 2010-09-04 13:42:42

(Update: Oh sorry; I'm new here. I didn't notice the much more substantial link to MathOverflow. But perhaps it is interesting for some newbie anyway...)

If the function is a polynomial (or power series) with constant term $\ne0$ this is difficult, but sometimes a meaningful solution can be given.

If the function is as above, but the constant term is zero, then it is just the question of relatively simple manipulation of the formal powerseries/polynomial to construct a "compositional" (or "iterative") root.

There is a very good treatment in L. Comtet, "Advanced Combinatorics", around pages 130-150 (I don't have it at hand).

Also the keywords "Bell matrix" and "Carleman matrix" are helpful: these matrices transform the problem of function composition/iteration into matrix products/matrix powers. (The matrix notation just implements the required formal power series manipulations.) This is well established for functions $f(x)= ax + bx^2 + cx^3 +\cdots$.

If the constant term does not vanish, $f(x) = K + ax + bx^2 + cx^3 +\cdots$, then things are usually much more difficult. But there is one (I think again well-established) workaround:

Rewrite $f(x)$ as some function $g(x+x_0) - x_0$ such that now $g(x)$ has no constant term and then apply the above mentioned procedures (solving for iterative square root and the like) to $g(x)$.

Example: denote the iterative root of some function $f(x)$ by $f^{[1/2]}(x)$; then solve

$$f^{[1/2]}(x) = g^{[1/2]}(x-x_0)+x_0$$

where you first must find $x_0$.
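For $f(x)=x^2+1$ this can be made concrete. The fixpoint equation $x_0^2+1=x_0$ gives $x_0=\tfrac12\pm\tfrac{\sqrt3}{2}i=e^{\pm i\pi/3}$, and then

$$g(x) = f(x+x_0)-x_0 = (x+x_0)^2+1-x_0 = 2x_0\,x + x^2,$$

where the constant term $x_0^2+1-x_0$ vanishes precisely because $x_0$ is a fixpoint; $g$ now has no constant term, so the power series machinery described above applies to it.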

[Update 2] I've tried to generate that power series for $g^{[1/2]}(x)$, which then has complex coefficients. I got $$ g^{[1/2]}(x) \approx \sqrt{2 x_0} x + (0.20412415 - 0.22379688 i) x^2 + (0.050024298 + 0.048042380 i) x^3 + (-0.022112213 + 0.028383580 i) x^4 + (-0.023623808 - 0.010393981 i) x^5 + (0.00074679924 - 0.021136421 i) x^6 + O(x^7)$$ where $f^{[1/2]}(x) = g^{[1/2]}(x-x_0) + x_0$ and the fixpoint is $x_0 = \exp(i \pi /3 ) \approx 1/2 + 0.86602540 i$, with $\sqrt{2 x_0}=(1.2247449 + 0.70710678 i)$. The range of convergence is small; however, using Euler summation (of complex orders) I could manage to reproduce $f(0)$ to $f(1)$ by applying the half iterate twice, to about 6 to 8 correct digits. [End update 2]

For a *very* basic introduction you may try my small essay; look for the article "continuous functional iteration".

## @Gottfried Helms 2013-08-04 20:30:46

(This is not an answer but a table of data after trying Anixx's first/accepted solution.) I get the following for the first few approximations using Pari/GP:

I don't see how this could be made to converge to some value.

## @Sheldon L 2013-08-10 01:46:52

Around $z=0$, the functional square root is also an even function, with a leading constant coefficient and an $x^2$ term as its first nonconstant term. I was able to generate a Taylor series, but it has a limited radius of convergence.

## @Gottfried Helms 2013-08-10 05:12:22

@Sheldon: I would really like to see it, and how you arrived at it.

## @Gottfried Helms 2013-08-10 06:09:26

Hmm, using Newton iteration to arrive at a square root of the Carleman matrix for $f(x)=1+x^2 \; (= g(g(x)))$, the best I could do was $g(x) \approx 0.642124541629 + 1.00681024077 x^2 - 0.515947721599 x^4 + 0.750971743189 x^6 + O(x^7)$ (I have a handful more coefficients, but only to at most 14 digits, only at even exponents, alternating in sign and also slowly diverging). Does it approximately match your result?

## @Gottfried Helms 2013-08-10 08:00:21

Hmm, instead of using size 64x64 for the Carleman matrices and 800 digits precision, I took 128x128 and 1200 digits precision. This gives $g(x) \approx 0.64209540 + 1.0068539 x^2 - 0.51369998 x^4 + 0.73159216 x^6 + O(x^7)$. It gives $g(0)$ as the constant term and $g(g(0)) \approx 1$ to 10 digits precision. However, the change caused by the bigger matrix size occurs already in the fifth digit of the constant... so this still leaves much room for improvement...

## @Sheldon L 2013-08-10 20:41:27

I'm getting a different solution than you. This solution is based on the Kneser method for the superfunction of $x^2+1$ with two complex fixed points, $0.5\pm\sqrt{-0.75}$. $ht(z)=\alpha^{-1}(\alpha(z)+0.5)$

```
ht = 0.642094752506609371698073
   + x^ 2 *    1.00690493722501474594184
   + x^ 4 *   -0.514140931459684938879986
   + x^ 6 *    0.733283234297571957782760
   + x^ 8 *   -1.32052177688948603556692
   + x^10 *    2.63888706127476651531197
   + x^12 *   -5.60464285450601626636908
   + x^14 *   12.4064962313713263598877
   + x^16 *  -28.3149176972867198028224
   + x^18 *   66.1664084652015917411194
   + x^20 * -157.552176646482538089039
```
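As a quick numerical cross-check of the coefficients above (a plain-Python sketch; the truncation to the 11 posted terms is the only input): composing the truncated series with itself at a small argument reproduces $x^2+1$ only approximately, illustrating the radius-of-convergence problem discussed in the answer, since $h(x)\ge a_0\approx 0.642$ for real $x$ always lands outside the disk of radius $\approx 0.598$.

```python
# Kneser-type half-iterate coefficients for f(x) = x^2 + 1, from the comment above
ht = [0.642094752506609371698073, 1.00690493722501474594184,
      -0.514140931459684938879986, 0.733283234297571957782760,
      -1.32052177688948603556692, 2.63888706127476651531197,
      -5.60464285450601626636908, 12.4064962313713263598877,
      -28.3149176972867198028224, 66.1664084652015917411194,
      -157.552176646482538089039]

def h(x):
    """Evaluate the truncated even series ht[0] + ht[1] x^2 + ht[2] x^4 + ..."""
    return sum(c * x ** (2 * i) for i, c in enumerate(ht))

x = 0.1
print(h(h(x)), x * x + 1)   # roughly equal; the truncated tail oscillates
```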

## @Sheldon L 2013-08-10 21:01:15

Here is another equally valid solution for $ht(z)=\alpha^{-1}(\alpha(z)+0.5)$: iterations of the reciprocal of $x^2+1$ put into correspondence with iterations of $x^2$ via the Boettcher super-attracting fixed point at zero.

```
ht = 0.642094504390828381495363
   + x^ 2 *    1.00690224867415593906994
   + x^ 4 *   -0.514130215253435435599237
   + x^ 6 *    0.733302763753911249332061
   + x^ 8 *   -1.32058357980755641903265
   + x^10 *    2.63883845336820960564369
   + x^12 *   -5.60443025341316030005301
   + x^14 *   12.4064479200198191890023
   + x^16 *  -28.3152137792182421744708
   + x^18 *   66.1663983446023842196175
   + x^20 * -157.550867142011717456622
```

## @Gottfried Helms 2013-08-10 21:07:26

@Shel: very nice! Since you mentioned the small range of convergence: this can easily be expanded by Euler summation. I have an example; but because the documentation needs some more characters I'll add another answer.

## @Anixx 2012-02-19 06:56:51

To get a closed-form solution (where possible) you can find a flow of $f(x)$.

Let $w(t)$ be a flow of $f(x)$.

Then, to find it, we have to solve a difference equation (called an Abel equation):

$$w(t+1)=f(w(t))$$

In our case it's

$$w(t+1)=1+w(t)^2$$

or in difference form,

$$\Delta w + w - w^2-1=0$$

Unfortunately, this first-order difference equation is non-linear in our case.

Supposing you somehow solve it, you will get $w_C(t)$, a family of solutions depending on the initial value $C=w_C(0)$. Then take $C=x$ and $t=1/2$ (for the square iterative root); $w_x(1/2)$ will be the answer.

## @Baby Dragon 2013-08-04 18:45:09

It seems to me that one can simply show the existence of a solution of the difference equation. The question here, it seems, would be one of uniqueness and of continuity with respect to the initial condition.

## @Anixx 2012-02-19 06:32:01

Look also at this answer:

https://mathoverflow.net/questions/17605/how-to-solve-ffx-cosx/44727#44727

In short, the analytic solution is

$$f^{[1/2]}(x)=\phi(x)=\sum_{m=0}^{\infty} \binom {1/2}m \sum_{k=0}^m\binom mk(-1)^{m-k}f^{[k]}(x)$$

$$f^{[1/2]}(x)=\lim_{n\to\infty}\binom {1/2}n\sum_{k=0}^n\frac{1/2-n}{1/2-k}\binom nk(-1)^{n-k}f^{[k]}(x)$$

$$f^{[1/2]}(x)=\lim_{n\to\infty}\frac{\sum_{k=0}^{n} \frac{(-1)^k f^{[k]}(x)}{(1/2-k)k!(n-k)!}}{\sum_{k=0}^{n} \frac{(-1)^k }{(1/2-k) k!(n-k)!}}$$

In the same way you can find not only the square iterative root but an iterative root of any power.
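A caution borne out by the comments that follow: for $f(x)=x^2+1$ the iterates $f^{[k]}(x)$ grow doubly exponentially, so these alternating sums overflow long before they could settle. The formula can, however, be checked directly on a case where it does converge. For the linear map $f(x)=ax$ with $|a-1|<1$ we have $f^{[k]}(x)=a^kx$, the inner sum collapses to $(a-1)^m x$, and the outer series sums to $\sqrt{a}\,x$, the exact half iterate (a plain-Python sketch of the first formula):

```python
import math

a = 0.5           # f(x) = a*x, a contracting linear map (|a - 1| < 1)
x = 2.0
terms = 40        # truncation of the outer binomial series

total = 0.0
c = 1.0           # running value of binom(1/2, m)
for m in range(terms + 1):
    inner = sum(math.comb(m, k) * (-1) ** (m - k) * (a ** k) * x
                for k in range(m + 1))
    total += c * inner
    c *= (0.5 - m) / (m + 1)      # binom(1/2, m+1) from binom(1/2, m)

# exact half iterate of x -> a*x is x -> sqrt(a)*x
print(total, math.sqrt(a) * x)
```

For $f(x)=x^2+1$ the same loop produces the wildly oscillating partial results Gottfried tabulates in his answer.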

## @Gottfried Helms 2013-08-04 20:26:12

I don't see how this can effectively be used. I tried it with $x=0.5$ for some small $n$; "small" because you cannot iterate the function much without getting an overflow. See my answer for a short list of results...

## @Christian Blatter 2010-11-03 19:26:25

Introduce a new coordinate system with a fixed point of $f$ as origin, e.g. the point $\omega:=e^{\pi i/3}$. Writing $x=\omega+\xi$ with a new independent variable $\xi$, one has $\phi(\omega+\xi)=\omega +\psi(\xi)$ for a new unknown function $\psi$ with $\psi(0)=0$. This function $\psi$ satisfies, in a neighbourhood of $\xi=0$, the functional equation $\psi(\psi(\xi))=2\omega\xi+\xi^2$. Now you can recursively determine the Taylor coefficients of $\psi$. If you are lucky, the resulting formal series actually defines an analytic function in a neighbourhood of $\xi=0$.
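A minimal numerical sketch of this recursion (plain Python complex floats; the names are ad hoc): the leading coefficient is $c_1=\sqrt{2\omega}$ (principal branch), and each further $c_n$ enters the composition $\psi(\psi(\xi))$ linearly with factor $c_1 + c_1^n$, so it can be solved for one order at a time.

```python
import cmath

N = 6                                      # highest Taylor order computed
omega = cmath.exp(1j * cmath.pi / 3)       # fixed point: omega^2 + 1 = omega

def series_mul(a, b):
    """Multiply two truncated Taylor series (lists of complex, index = power)."""
    r = [0j] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(N + 1 - i):
                r[i + j] += ai * b[j]
    return r

def self_compose(c):
    """Taylor coefficients of psi(psi(xi)) up to order N, for psi given by c."""
    out = [0j] * (N + 1)
    P = [0j] * (N + 1)
    P[0] = 1 + 0j                          # psi^0
    for k in range(1, N + 1):
        P = series_mul(P, c)               # psi^k
        for i in range(N + 1):
            out[i] += c[k] * P[i]
    return out

# psi(psi(xi)) = 2*omega*xi + xi^2 : solve for the coefficients order by order
c = [0j] * (N + 1)
c[1] = cmath.sqrt(2 * omega)               # c1^2 = 2*omega, principal branch
target = [0j] * (N + 1)
target[1] = 2 * omega
target[2] = 1 + 0j
for n in range(2, N + 1):
    comp = self_compose(c)                 # c[n] is still 0 here
    c[n] = (target[n] - comp[n]) / (c[1] + c[1] ** n)

for n in range(1, N + 1):
    print(n, c[n])
```

The resulting $c_2 \approx 0.2041 - 0.2238\,i$ and $c_3 \approx 0.0500 + 0.0480\,i$ agree with the complex coefficients Gottfried reports for $g^{[1/2]}$ in update 2 of his answer, which solves the same conjugated equation.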

## @Gottfried Helms 2016-06-17 20:52:53

Unfortunately, the series has complex Taylor coefficients (see the coefficients in my answer), and the result for the half iterate is also complex-valued even if the argument is real, for instance $z=0.5$. In the other answers (of Sh Levenstein) it is proposed to apply a method derived from Kneser's real-valued solution for the half iterate of the exp function, which might then be seen as more "friendly".