
Curve Newton's Method (EN)

By 0xstan & 0xmc

Reference and Innovation of Newton's Method Code

1. The necessity and importance of Newton's method in Solidity & Vyper

Just as the user-facing application layer ultimately captured the most value when the Internet evolved through web 1.0 and web 2.0, the blockchain world is not short of increasingly homogeneous infrastructure, but of application layers with different functions. If web 3.0 really does take off in the future, the driver of value capture will shift from the current base layer to the application layer. The author believes that the EVM's performance bottlenecks and the gas cost of computation, plus some language features of Solidity and Vyper, currently discourage many developers from building many application features; among these, the imbalance between numerical-calculation performance and resource consumption is a relatively troublesome problem developers face at present. Many of the "fast" and "cheap" algorithms in Python, C++, and JavaScript have to make various compromises when implemented in Solidity or Vyper. To achieve a specific function, many project teams even need to independently implement an algorithm that is difficult to understand, and these implementations rarely become a universal "wheel" that other developers can use directly; understanding them requires developers to work together.

Curve is exactly such a project team. It is a typical project led by a single developer, with many original ideas and innovations in its engineering implementation. The importance of Curve in the blockchain world needs no elaboration.

Here we salute Michael Egorov, the great founder of Curve, and thank him for the many innovations he has brought to the blockchain. In this article I offer my own views and insights on one small point, with three purposes:

  • Explain the Newton's-method code in the Curve project
  • Show the ways Newton's method is transformed and adapted to different business scenarios
  • Inspire research into other numerical optimization algorithms

The third point is, in my opinion, the most important, for the following reason:

Obviously, Newton's method is not the fastest and cheapest universal algorithm for numerical optimization. Just as no DeFi project can guarantee it is free of bugs, no numerical optimization algorithm in the DeFi world can guarantee it is the "fastest" and the "cheapest" for a given project. We hope that subsequent researchers will continue in-depth research and discussion on the questions we raise, which are as follows (if possible, we hope the Curve project can provide Grant support for them):

  • If Curve's Newton's method is not optimal, which algorithm is the fastest and most "economical" (saves gas)?
  • Build Solidity function wheels for Newton's method or its variants to meet the basic numerical-calculation needs of most other developers (saves developers' hair).

2. The code implementation idea of Newton's method in Curve V1

The comments and the code are inconsistent; here is a link to my article:

2.1 Introduction of Newton's Method

Since the equation cannot be solved analytically in the contract, Curve uses Newton's method to iteratively find an approximate solution on-chain:

x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}

A simple way to understand Newton's method is that the formula above is applied repeatedly so that x_n gets closer and closer to the true solution.
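As an illustration only (not Curve code), the iteration can be sketched in a few lines of Python; the function, derivative, starting point, and tolerance are all arbitrary choices for the example, here recovering the square root of 5:

```python
def newton(f, f_prime, x0, tol=1e-12, max_iter=255):
    """Generic Newton's method: repeat x -= f(x) / f'(x) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / f_prime(x)
        x -= step
        if abs(step) <= tol:
            return x
    raise ValueError("Did not converge")

# Positive root of f(x) = x**2 - 5, i.e. sqrt(5)
root = newton(lambda v: v * v - 5, lambda v: 2 * v, x0=5.0)
```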

2.2 Curve V1: Derivation process using Newton's method and corresponding code of function get_D

The function f(D) of Newton's method used in the code is derived from the core formula. The derivation process is briefly described below:

  1. First transform the core formula into the form f(D) = 0:

f(D)=An^n\Sigma x_i + D-ADn^n-\frac{D^{n+1}}{n^n\Pi x_i}

  2. Substitute into D_new = D - f(D)/f'(D):

D_{new}=D-\frac{An^n\Sigma x_i + D-ADn^n-\frac{D^{n+1}}{n^n\Pi x_i}}{1-An^n-(n+1)\frac{D^n}{n^n\Pi x_i}}

  3. This eventually becomes the form used in the code (note the numerator term n\frac{D^{n+1}}{n^n\Pi x_i}, which matches D_P * N_COINS in the code):

D_{new}=\frac{An^n\Sigma x_i+n\frac{D^{n+1}}{n^n\Pi x_i}}{An^n+(n+1)\frac{D^n}{n^n\Pi x_i}-1}

  4. Use a for loop to iterate towards the solution of D. When the difference between D and the previous iterate is less than or equal to 1, stop the iteration.

  5. Usually the solution is found within about 4 rounds of iteration; however, if no suitable solution is found after 255 iterations, the program terminates and the transaction is rolled back (revert); users can use the remove_liquidity method to take back their assets.

Note: the Newton's-method formula in the comments is missing one term; the correctly simplified formula is the one implemented in the code.

def _get_D(_xp: uint256[N_COINS], _amp: uint256) -> uint256:
    """
    D invariant calculation in non-overflowing integer operations

    A * sum(x_i) * n**n + D = A * D * n**n + D**(n+1) / (n**n * prod(x_i))

    Converging solution:
    D[j+1] = (A * n**n * sum(x_i) + n * D[j]**(n+1) / (n**n * prod(x_i))) / (A * n**n + (n+1) * D[j]**n / (n**n * prod(x_i)) - 1)

    Here the original code comment misses one item in the denominator:
    (n+1) * D[j]**n / (n**n * prod(x_i))

    Link of Original code :
    """
    S: uint256 = 0
    Dprev: uint256 = 0

    for _x in _xp:
        S += _x
    if S == 0:
        return 0

    D: uint256 = S
    Ann: uint256 = _amp * N_COINS
    for _i in range(255):
        D_P: uint256 = D
        for _x in _xp:
            D_P = D_P * D / (_x * N_COINS)  # If division by 0, this will be borked: only withdrawal will work. And that is good
            # Only when liquidity is removed is it possible for _x to be 0,
            # at which point the call reverts, which is exactly the result we expect;
            # the values in xp cannot be 0 when adding liquidity
        Dprev = D
        D = (Ann * S / A_PRECISION + D_P * N_COINS) * D / ((Ann - A_PRECISION) * D / A_PRECISION + (N_COINS + 1) * D_P)
        # Equality with the precision of 1:
        # when the difference between the new iterate and the previous one
        # is at most 1 (i.e. 0 or 1), the solution is considered found
        if D > Dprev:
            if D - Dprev <= 1:
                return D
        else:
            if Dprev - D <= 1:
                return D
    # convergence typically occurs in 4 rounds or less, this should be unreachable!
    # if it does happen the pool is borked and LPs can withdraw via `remove_liquidity`
    raise
def _get_D_mem(_balances: uint256[N_COINS], _amp: uint256) -> uint256:
    return self._get_D(self._xp_mem(_balances), _amp)
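As a sanity check (a float sketch of our own, not pool code; the helper name and parameter values are ours), the simplified update formula derived above can be run directly: for a perfectly balanced pool x_i = t the invariant gives D = n·t exactly, and for an imbalanced pool the converged D satisfies the core StableSwap equation:

```python
def get_D_float(xp, amp):
    """Float sketch of the get_D iteration:
    D_new = (A*n^n*S + n*D_P) / (A*n^n + (n+1)*D_P/D - 1),
    with D_P = D^(n+1) / (n^n * prod(x))."""
    n = len(xp)
    S = sum(xp)
    Ann = amp * n ** n  # A * n^n
    D = S  # the contract also starts the iteration from S
    for _ in range(255):
        D_P = D
        for x in xp:
            D_P = D_P * D / (x * n)  # accumulates D^(n+1) / (n^n * prod(x))
        D_prev = D
        D = (Ann * S + n * D_P) / (Ann + (n + 1) * D_P / D - 1)
        if abs(D - D_prev) <= 1e-12 * D:
            return D
    raise ValueError("Did not converge")

# Balanced 3-coin pool: D equals n * t exactly
D_balanced = get_D_float([100.0, 100.0, 100.0], amp=100)
# Imbalanced pool: converged D still satisfies
# A*n^n*S + D = A*n^n*D + D^(n+1)/(n^n*prod(x))
D_imb = get_D_float([50.0, 100.0, 200.0], amp=100)
```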

2.3 Exploration, innovation, and conclusions from the Curve team's historical mistakes in get_D

Learning the right Newton's method from mistakes: when we refactored the Vyper-based Curve in Solidity, we found that the comments here are inconsistent with the source code (missing and erroneous comments are relatively common in the Curve source).

The comment was first written by Sam Werner, a member of the Curve team and a PhD student in the Centre for Cryptocurrency Research and Engineering at Imperial College. We thought at the time that Sam, himself a PhD student at Imperial College, might have proposed a better method than the Curve CEO's, which is why it was written in the comment. We therefore implemented a variant of Newton's method following Sam's comment, which we will not expand on further due to the space limits of this article.

According to the idea in Sam's comment, we do not need to change the numerator of Newton's method: by scaling and simplifying the denominator, we change the slope of each tangent used by Newton's method, and in theory we should eventually get the same numerical result.

The final result of the experiment is that changing the denominator, that is, the slope of the tangent line in Newton's method, does not shift the final result much, but it greatly harms convergence. This is disastrous, because Newton's method in both Curve V1 and Curve V2 must reach a given precision within 255 iterations.

On the convergence of Newton's method: a proof of global convergence was only given as late as 2018, but near the solution Newton's method converges quadratically. The conclusion is that Newton's method needs at most about \log \log (1/\varepsilon) steps to bring the approximate solution within \varepsilon of the true solution. The common precision in DeFi is 10^18, i.e. \varepsilon = 10^{-18}, and \log \log (10^{18}) \approx 4, which is the basis of the author's claim that Newton's method usually reaches a result in about 4 iterations.
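The bound is easy to check numerically (using natural logarithms, with ε = 10⁻¹⁸):

```python
import math

# Quadratic convergence: roughly log(log(1/eps)) Newton steps suffice
# once the iterate is near the solution
eps = 1e-18
steps = math.log(math.log(1 / eps))  # log log (10**18)
```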

The experiments show that changing the slope destroys the convergence rate of Newton's method, making it effectively unusable.
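The phenomenon can be reproduced on a toy problem (our own sketch, not Curve's code): keep the numerator f(x) but scale the derivative by a constant factor, which distorts the tangent slope exactly as described above. True Newton converges quadratically; the scaled variant only linearly, needing far more iterations:

```python
def iterations(scale, x0=100.0, tol=1e-12, max_iter=10_000):
    """Solve x**2 - 2 = 0 with the step x -= f(x) / (scale * f'(x));
    scale = 1.0 is the true Newton's method."""
    x = x0
    for i in range(max_iter):
        step = (x * x - 2.0) / (scale * 2.0 * x)
        x -= step
        if abs(step) <= tol:
            return i + 1  # number of iterations used
    return max_iter

true_newton = iterations(scale=1.0)   # quadratic convergence near the root
scaled_slope = iterations(scale=4.0)  # distorted tangent: only linear convergence
```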

Newton's method has many derived variants, including direct improvements of Newton's method (AN, HN, MN), the damped Newton's method, and quasi-Newton methods (DFP, BFGS, L-BFGS). These algorithms are safer than directly scaling the derivative in Newton's method; direct scaling is seldom validated.

We also considered partial scaling in Curve V2, and likewise found that the convergence speed was not ideal.

All in all, Sam Werner did leave a not-very-good comment here, but it unexpectedly inspired a deeper understanding and application of Newton's method.

2.4 Curve V1: Derivation process using Newton's method and corresponding code of function get_y

Given input asset i, its amount x after the change, and the pre-change balances _xp, calculate the amount of asset j after the change.

As when calculating D, Newton's method is used here to calculate y, that is, x_j. The derivation of f(x_j) starts from the core formula: An^n\sum x_i + D = ADn^n + \frac{D^{n+1}}{n^n\prod x_i}

  1. Let sum' and prod' be the sum and the product over all assets excluding the output asset:

    • sum' = sum(x) - x_j
    • prod' = prod(x) / x_j
  2. Divide the core formula by A*n**n:

    \sum x_i+\frac{D}{An^n}-D=\frac{D^{n+1}}{An^n n^n\prod x_i}

  3. Multiply by x_j and substitute sum' and prod':

    x_j(\text{sum}'+x_j)+\frac{Dx_j}{An^n}-Dx_j=\frac{D^{n+1}}{An^n n^n \text{prod}'}

  4. Expand the polynomial; this is the f(x_j) used by Newton's method:

    f(x_j)=x_j^2+x_j\left(\text{sum}'+\frac{D}{An^n}-D\right)-\frac{D^{n+1}}{An^n n^n \text{prod}'}

  5. Write it in the form x_j**2 + b*x_j = c and substitute into the Newton's method formula x = x - f(x) / f'(x):

    • x_j = (x_j**2 + c) / (2*x_j + b)

def _get_y(i: int128, j: int128, x: uint256, _xp: uint256[N_COINS]) -> uint256:
    """
    Calculate x[j] if one makes x[i] = x

    Done by solving quadratic equation iteratively.
    x_1**2 + x_1 * (sum' - (A*n**n - 1) * D / (A * n**n)) = D ** (n + 1) / (n ** (2 * n) * prod' * A)
    x_1**2 + b*x_1 = c

    x_1 = (x_1**2 + c) / (2*x_1 + b)
    """
    # x in the input is converted to the same price/precision

    assert i != j       # dev: same coin
    assert j >= 0       # dev: j below zero
    assert j < N_COINS  # dev: j above N_COINS

    # should be unreachable, but good for safety
    assert i >= 0
    assert i < N_COINS

    A: uint256 = self._A()
    D: uint256 = self._get_D(_xp, A)
    Ann: uint256 = A * N_COINS
    c: uint256 = D
    S: uint256 = 0
    _x: uint256 = 0
    y_prev: uint256 = 0

    for _i in range(N_COINS):
        if _i == i:
            _x = x
        elif _i != j:
            _x = _xp[_i]
        else:
            continue  # skip the output coin j
        S += _x
        c = c * D / (_x * N_COINS)
    c = c * D * A_PRECISION / (Ann * N_COINS)
    b: uint256 = S + D * A_PRECISION / Ann  # - D
    y: uint256 = D
    for _i in range(255):
        y_prev = y
        y = (y*y + c) / (2 * y + b - D)
        # Equality with the precision of 1
        if y > y_prev:
            if y - y_prev <= 1:
                return y
        else:
            if y_prev - y <= 1:
                return y
    raise

def get_dy(i: int128, j: int128, _dx: uint256) -> uint256:
    xp: uint256[N_COINS] = self._xp()
    rates: uint256[N_COINS] = RATES

    x: uint256 = xp[i] + (_dx * rates[i] / PRECISION)
    y: uint256 = self._get_y(i, j, x, xp)
    dy: uint256 = xp[j] - y - 1
    fee: uint256 = self.fee * dy / FEE_DENOMINATOR
    return (dy - fee) * PRECISION / rates[j]

There is also a _get_y_D() function; the difference from the above is that it lets you specify the value of D used to solve for y.

def _get_y_D(A: uint256, i: int128, _xp: uint256[N_COINS], D: uint256) -> uint256:
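The fixed-point form x = (x² + c) / (2x + b) derived above can be checked standalone with plain floats (toy values of b, c, and the starting point, not pool numbers):

```python
def solve_quadratic(b, c, x0, tol=1e-12, max_iter=255):
    """Newton's method for x**2 + b*x - c = 0,
    simplified to the fixed-point form x = (x**2 + c) / (2*x + b)."""
    x = x0
    for _ in range(max_iter):
        x_prev = x
        x = (x * x + c) / (2 * x + b)
        if abs(x - x_prev) <= tol:
            return x
    raise ValueError("Did not converge")

# x**2 + 3*x - 10 = 0 has the positive root x = 2
root = solve_quadratic(b=3.0, c=10.0, x0=10.0)
```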

3. The code implementation idea of Newton's method in Curve V2

The Newton's method in Curve V2 is transformed to a certain extent, and it is relatively difficult to understand fully given the sparse comments. Here we set aside the heavier technical details and look directly from the author's point of view at how he converts the mathematical idea into code step by step, hoping to inspire the design of more complex algorithms. For ease of understanding, the code we show is from the Python version of the test file.

3.1 Curve V2: Derivation process using Newton's method and corresponding code of function geometric_mean

Here we input an array x and compute its geometric mean as D. In the formulas below the code's D and N are written as d and n:

d=\left(\prod _{i=0}^{n-1} x(i)\right)^{1/n}

d is the target of Newton's method. To obtain a constructed function with a convenient derivative, raise both sides of the equation to the n-th power:

d^n=\prod _{i=0}^{n-1} x(i)

Move the right side of the equation to the left to construct the function f(d)

f(d)=d^n-\prod _{i=0}^{n-1} x(i)

Take the derivative of f(d):

f'(d)=n d^{n-1}

According to Newton's method, d_new= d_prev-f(d_prev)/f'(d_prev)

d_{\text{new}}=d-\frac{d^n-\prod _{i=0}^{n-1} x(i)}{n d^{n-1}}

Dividing the numerator and denominator of the fraction by d^{n-1}, we get

d_{\text{new}}=d-\frac{d-\frac{\prod _{i=0}^{n-1} x(i)}{d^{n-1}}}{n}

Next, we rewrite the expression as a single numerator over a single denominator, so that simplification happens in the numerator first; the multiply-before-divide principle helps control precision in Vyper and Solidity.

d_{\text{new}}=\frac{nd-d+\frac{\prod _{i=0}^{n-1} x(i)}{d^{n-1}}}{n}

The term \frac{\prod _{i=0}^{n-1} x(i)}{d^{n-1}} appears: its numerator is a product of n factors, while its denominator contains n-1 factors of d. To give the numerator and denominator the same number of factors, so that both can be accumulated in the same loop when the code iterates, we multiply both by d:

d_{\text{new}}=\frac{nd-d+\frac{d \left(\prod _{i=0}^{n-1} x(i)\right)}{d^n}}{n}

Then simplify the numerator and extract the common factor d; this simplification reduces the computational cost.

d_{\text{new}}=\frac{d \left((n-1)+\frac{\prod _{i=0}^{n-1} x(i)}{d^n}\right)}{n}

At this point we arrive at the line in the code

D = D * ((N - 1) * 10 ** 18 + tmp) // (N * 10**18)

Now everything comes to light. Here tmp is exactly the \frac{\prod _{i=0}^{n-1} x(i)}{d^{n}} that appeared in the derivation, which corresponds to the tmp accumulated by the n-round for loop. After adding precision control, this matches the author's code exactly.

def geometric_mean(x):
    N = len(x)
    x = sorted(x, reverse=True)  # Presort - good for convergence
    D = x[0]
    for i in range(255):
        D_prev = D
        tmp = 10 ** 18
        for _x in x:
            tmp = tmp * _x // D
        D = D * ((N - 1) * 10**18 + tmp) // (N * 10**18)
        diff = abs(D - D_prev)
        if diff <= 1 or diff * 10**18 < D:
            return D
    raise ValueError("Did not converge")
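A quick sanity check of the routine (repeated here so the snippet is self-contained; inputs are 10**18 fixed-point, and the test values are our own): the geometric mean of 1.0 and 4.0 should be 2.0.

```python
def geometric_mean(x):
    # Same routine as above: Newton iteration for D = prod(x)**(1/N)
    # in 10**18 fixed point
    N = len(x)
    x = sorted(x, reverse=True)  # Presort - good for convergence
    D = x[0]
    for _ in range(255):
        D_prev = D
        tmp = 10 ** 18
        for _x in x:
            tmp = tmp * _x // D
        D = D * ((N - 1) * 10 ** 18 + tmp) // (N * 10 ** 18)
        diff = abs(D - D_prev)
        if diff <= 1 or diff * 10 ** 18 < D:
            return D
    raise ValueError("Did not converge")

# geometric mean of 1.0 and 4.0 (fixed point): expect ~2.0 * 10**18,
# accurate to within a few wei
result = geometric_mean([10 ** 18, 4 * 10 ** 18])
```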

3.2 Curve V2: Derivation process using Newton's method and corresponding code of function newton_D

The formulas in the white paper are as follows; the initial value of Newton's method is D0.

Equilibrium equation: KD^{N-1}\sum x_i+\prod x_i=KD^N+\left(\frac{D}{N}\right)^N

K_0 = \frac{\prod x_i N^N}{D^N} , \qquad K=AK_0\frac{\gamma^2}{(\gamma+1-K_0)^2}



x_{i,0}=\frac{D^{N-1}}{\prod_{k \neq i}x_k N^{N-1}}

How did the author think of this form of Newton's method? Why do intermediate variables such as mul1 and mul2 exist? We write D as d and deduce according to the author's idea. First, fully expand the equilibrium equation of Curve V2:

f(d)=-\frac{A \gamma ^2 n^n \prod _{i=1}^n x(i)}{\left(-n^n d^{-n} \left(\prod _{i=1}^n x(i)\right)+\gamma +1\right)^2}+\frac{A \gamma ^2 n^n \left(\prod _{i=1}^n x(i)\right) \sum _{i=1}^n x(i)}{d \left(-n^n d^{-n} \left(\prod _{i=1}^n x(i)\right)+\gamma +1\right)^2}+\prod _{i=1}^n x(i)-\left(\frac{d}{n}\right)^n

Differentiating the above formula with respect to d:

f'(d)=\frac{2 A \gamma ^2 n^{2 n+1} d^{-n-1} \left(\prod _{i=1}^n x(i)\right)^2}{\left(-n^n d^{-n} \left(\prod _{i=1}^n x(i)\right)+\gamma +1\right)^3}-\frac{2 A \gamma ^2 n^{2 n+1} d^{-n-2} \left(\prod _{i=1}^n x(i)\right)^2 \left(\sum _{i=1}^n x(i)\right)}{\left(-n^n d^{-n} \left(\prod _{i=1}^n x(i)\right)+\gamma +1\right)^3}-\frac{A \gamma ^2 n^n \left(\prod _{i=1}^n x(i)\right) \sum _{i=1}^n x(i)}{d^2 \left(-n^n d^{-n} \left(\prod _{i=1}^n x(i)\right)+\gamma +1\right)^2}-\left(\frac{d}{n}\right)^{n-1}

According to Newton's method:

d_{new}=d-\frac{f(d)}{f'(d)}

Expanding the formula, we get:

d_{new}=d-\frac{-\frac{A \gamma ^2 n^n \prod _{i=1}^n x(i)}{\left(-n^n d^{-n} \left(\prod _{i=1}^n x(i)\right)+\gamma +1\right)^2}+\frac{A \gamma ^2 n^n \left(\prod _{i=1}^n x(i)\right) \sum _{i=1}^n x(i)}{d \left(-n^n d^{-n} \left(\prod _{i=1}^n x(i)\right)+\gamma +1\right)^2}+\prod _{i=1}^n x(i)-\left(\frac{d}{n}\right)^n}{\frac{2 A \gamma ^2 n^{2 n+1} d^{-n-1} \left(\prod _{i=1}^n x(i)\right)^2}{\left(-n^n d^{-n} \left(\prod _{i=1}^n x(i)\right)+\gamma +1\right)^3}-\frac{2 A \gamma ^2 n^{2 n+1} d^{-n-2} \left(\prod _{i=1}^n x(i)\right)^2 \left(\sum _{i=1}^n x(i)\right)}{\left(-n^n d^{-n} \left(\prod _{i=1}^n x(i)\right)+\gamma +1\right)^3}-\frac{A \gamma ^2 n^n \left(\prod _{i=1}^n x(i)\right) \sum _{i=1}^n x(i)}{d^2 \left(-n^n d^{-n} \left(\prod _{i=1}^n x(i)\right)+\gamma +1\right)^2}-\left(\frac{d}{n}\right)^{n-1}}

The subexpression that appears most often is -n^n d^{-n} \left(\prod _{i=1}^n x(i)\right)+\gamma +1 , which is the expansion of \gamma+1-K_0 in the equilibrium equation. For the convenience of observation and simplification, we denote \gamma+1-K_0 as g1k0, and the equation after replacement becomes:

d-\frac{\frac{A \gamma ^2 n^n \left(\prod _{i=1}^n x(i)\right) \sum _{i=1}^n x(i)}{d\ \text{g1k0}^2}-\frac{A \gamma ^2 n^n \prod _{i=1}^n x(i)}{\text{g1k0}^2}+\prod _{i=1}^n x(i)-\left(\frac{d}{n}\right)^n}{\frac{2 A \gamma ^2 n^{2 n+1} d^{-n-1} \left(\prod _{i=1}^n x(i)\right)^2}{\text{g1k0}^3}-\frac{2 A \gamma ^2 n^{2 n+1} d^{-n-2} \left(\prod _{i=1}^n x(i)\right)^2 \left(\sum _{i=1}^n x(i)\right)}{\text{g1k0}^3}-\frac{A \gamma ^2 n^n \left(\prod _{i=1}^n x(i)\right) \sum _{i=1}^n x(i)}{d^2 \text{g1k0}^2}-\left(\frac{d}{n}\right)^{n-1}}

At the same time, \prod _{i=1}^n x(i) is abbreviated as Prod, and \sum _{i=1}^n x(i) is abbreviated as S. The above equation simplifies to:

d-\frac{\frac{A \gamma ^2 n^n \text{Prod}\, S}{d\, \text{g1k0}^2}-\frac{A \gamma ^2 n^n \text{Prod}}{\text{g1k0}^2}-\left(\frac{d}{n}\right)^n+\text{Prod}}{\frac{2 A \gamma ^2 n^{2 n+1} \text{Prod}^2 d^{-n-1}}{\text{g1k0}^3}-\frac{2 A \gamma ^2 n^{2 n+1} \text{Prod}^2 S d^{-n-2}}{\text{g1k0}^3}-\frac{A \gamma ^2 n^n \text{Prod}\, S}{d^2 \text{g1k0}^2}-\left(\frac{d}{n}\right)^{n-1}}

Denote the numerator and denominator as:

\text{numerator}=\frac{A \gamma ^2 n^n \text{Prod}\, S}{d\, \text{g1k0}^2}-\frac{A \gamma ^2 n^n \text{Prod}}{\text{g1k0}^2}-\left(\frac{d}{n}\right)^n+\text{Prod}

\text{denominator}=\frac{2 A \gamma ^2 n^{2 n+1} \text{Prod}^2 d^{-n-1}}{\text{g1k0}^3}-\frac{2 A \gamma ^2 n^{2 n+1} \text{Prod}^2 S d^{-n-2}}{\text{g1k0}^3}-\frac{A \gamma ^2 n^n \text{Prod}\, S}{d^2 \text{g1k0}^2}-\left(\frac{d}{n}\right)^{n-1}

Next, we observe and analyze the complex formula: most of the terms in the numerator and denominator contain \frac{A \gamma ^2 n^n \text{Prod}}{\text{g1k0}^2} as a common factor, and K=\frac{A \gamma ^2 n^n \text{Prod}\, d^{-n}}{\text{g1k0}^2} in the equilibrium equation is very similar to this factor. K itself contains K0, and K0 requires an n-round loop and contains D. Therefore, forcing K out as a new common factor gives:

denominator= \frac{A \gamma ^2 n^n d^{-n} \text{Prod}}{\text{g1k0}^2}*\left(-\frac{\text{g1k0}^2 n^{1-n} d^{n-1}}{A \gamma ^2 n^n \text{Prod}}-\frac{2 n^{n+1} \text{Prod}\, S d^{-n-2}}{\text{g1k0}}+\frac{2 n^{n+1} \text{Prod}\, d^{-n-1}}{\text{g1k0}}-\frac{S}{d^2}\right)

We find that the factor \text{Prod}*n^n d^{-n}=K_0 appears in the denominator, and we convert the K0-like structures in the denominator to K0:

denominator= K*\left(-\frac{\text{g1k0}^2 n}{d \left(A \gamma ^2 \text{K0}\, n^n\right)}-\frac{2 \text{K0}\, n S}{d^2 \text{g1k0}}-\frac{S}{d^2}+\frac{2 \text{K0}\, n}{d\, \text{g1k0}}\right)

Then we observe that the terms contain many factors of d in their denominators, and we consider eliminating them. Also notice that A and n^n always appear at the same time, so we merge An^n as ANN and get the following formula:

denominator= Kd^2*\left(-\frac{d\, \text{g1k0}^2 n}{ANN \gamma ^2 \text{K0}}+\frac{2 d\, \text{K0}\, n}{\text{g1k0}}-\frac{2 \text{K0}\, n S}{\text{g1k0}}-S\right)

Observing again, we find that the formula contains \frac{2 \text{K0}\, n}{\text{g1k0}} repeatedly; we mark it as mul2, and at the same time mark the part of the first, more complex term that does not involve K0 or the for loop as mul1=\frac{\text{g1k0}^2}{\text{ANN} \gamma ^2}:

denominator=-Kd^2*\left((S+S*mul2+mul1)\, \frac{n}{K0} - mul2*d\right)

Denote -denominator/(Kd^2) as neg_fprime. In the same way, the numerator can be transformed by Kd^2, and substituting into the formula below yields a similar result:

d_{new}=d-f(d)/f'(d)

d_{new} = (d * \text{neg\_fprime} + d * S - d^2)/\text{neg\_fprime} - d * (mul1/\text{neg\_fprime}) * (1-K0)/K0

We obtain exactly the same core Newton formula as in the code.

def newton_D(A, gamma, x, D0):
    D = D0
    i = 0

    S = sum(x)
    x = sorted(x, reverse=True)
    N = len(x)

    for i in range(255):
        D_prev = D

        K0 = 10**18
        for _x in x:
            K0 = K0 * _x * N // D

        _g1k0 = abs(gamma + 10**18 - K0)

        # D / (A * N**N) * _g1k0**2 / gamma**2
        mul1 = 10**18 * D // gamma * _g1k0 // gamma * _g1k0 * A_MULTIPLIER // A

        # 2*N*K0 / _g1k0
        mul2 = (2 * 10**18) * N * K0 // _g1k0

        neg_fprime = (S + S * mul2 // 10**18) + mul1 * N // K0 - mul2 * D // 10**18
        assert neg_fprime > 0  # Python only: -f' > 0

        # D -= f / fprime
        D = (D * neg_fprime + D * S - D**2) // neg_fprime - D * (mul1 // neg_fprime) // 10**18 * (10**18 - K0) // K0

        if D < 0:
            D = -D // 2
        if abs(D - D_prev) <= max(100, D // 10**14):
            return D

    raise ValueError("Did not converge")
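The expanded f(d) from the derivation can be cross-checked with floats (a sketch of our own with toy parameter values A, gamma, t; real pools use fixed-point integers and different magnitudes): for a balanced two-coin pool the root is exactly d = N·t, and f changes sign around it.

```python
def f_v2(d, A, gamma, x):
    """Float version of the Curve V2 equilibrium written as f(d) = 0:
    K*d^(N-1)*sum(x) + prod(x) - K*d^N - (d/N)^N, with
    K0 = prod(x)*N^N/d^N and K = A*K0*gamma^2/(gamma + 1 - K0)^2."""
    n = len(x)
    prod = 1.0
    for xi in x:
        prod *= xi
    K0 = prod * n ** n / d ** n
    K = A * K0 * gamma ** 2 / (gamma + 1.0 - K0) ** 2
    return K * d ** (n - 1) * sum(x) + prod - K * d ** n - (d / n) ** n

A, gamma, t = 10.0, 1.0, 1.0               # toy values, not real pool parameters
residual = f_v2(2.0 * t, A, gamma, [t, t])  # balanced pool: f(N*t) = 0
```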

3.3 Curve V2: Derivation process using Newton's method and corresponding code of function newton_y

There is an error concerning the initial values in the Curve V2 white paper. We can set the initial values of the Newton iterations for D and x_i as follows to find the solution faster.



The definition of the initial value x_{i,0} in the white paper is wrong: the power of D in the numerator is written as N-1, and the power of N in the denominator is written as N-1, but both should be N. We corrected them in the code.

Wrong definition of the initial value in the white paper: x_{i,0}=\frac{D^{N-1}}{\prod_{k \neq i}{x_k}\, N^{N-1}}

How did the author come up with this Newton's method, and why do intermediate variables such as mul1 and mul2 exist? We continue to follow the author's ideas and deduce. In the same way, the equilibrium equation of Curve V2 is fully expanded. For convenience of writing, we assume x(n) is the unknown to be solved by the newton_y method; we expand the equilibrium equation and treat it as a function of x(n):

f(x(n))=-\frac{A \gamma ^2 n^n x(n) \prod _{k=1}^{n-1} x(k)}{\left(-n^n d^{-n} x(n) \left(\prod _{k=1}^{n-1} x(k)\right)+\gamma +1\right)^2}+\frac{A \gamma ^2 n^n \left(x(n) \prod _{k=1}^{n-1} x(k)\right) \left(\sum _{k=1}^{n-1} x(k)+x(n)\right)}{d \left(-n^n d^{-n} x(n) \left(\prod _{k=1}^{n-1} x(k)\right)+\gamma +1\right)^2}+x(n) \left(\prod _{k=1}^{n-1} x(k)\right)-\left(\frac{d}{n}\right)^n

Differentiate f(x(n)) to get:

f'(x(n))= -\frac{A \gamma ^2 n^n \prod _{k=1}^{n-1} x(k)}{\left(-n^n d^{-n} x(n) \left(\prod _{k=1}^{n-1} x(k)\right)+\gamma +1\right)^2}+\frac{A \gamma ^2 n^n x(n) \left(\prod _{k=1}^{n-1} x(k)\right)}{d \left(-n^n d^{-n} x(n) \left(\prod _{k=1}^{n-1} x(k)\right)+\gamma +1\right)^2}+\frac{A \gamma ^2 n^n \left(\prod _{k=1}^{n-1} x(k)\right) \left(\sum _{k=1}^{n-1} x(k)+x(n)\right)}{d \left(-n^n d^{-n} x(n) \left(\prod _{k=1}^{n-1} x(k)\right)+\gamma +1\right)^2}-\frac{2 A \gamma ^2 n^{2 n} d^{-n} x(n) \left(\prod _{k=1}^{n-1} x(k)\right)^2}{\left(-n^n d^{-n} x(n) \left(\prod _{k=1}^{n-1} x(k)\right)+\gamma +1\right)^3}+\frac{2 A \gamma ^2 n^{2 n} d^{-n-1} x(n) \left(\prod _{k=1}^{n-1} x(k)\right)^2 \left(\sum _{k=1}^{n-1} x(k)+x(n)\right)}{\left(-n^n d^{-n} x(n) \left(\prod _{k=1}^{n-1} x(k)\right)+\gamma +1\right)^3}+\prod _{k=1}^{n-1} x(k)

Similar to newton_D, we find that a substitution greatly improves readability:

denote -n^n d^{-n} x(n) \left(\prod _{k=1}^{n-1} x(k)\right)+\gamma +1 as g1k0

denote A*n^n as ANN

denote \sum _{k=1}^{n-1} x(k) as S

Simplify as :

f'(x(n))=-\frac{2 \text{ANN} \gamma ^2 n^n d^{-n} x(n) \left(\prod _{k=1}^{n-1} x(k)\right)^2}{\text{g1k0}^3}+\frac{2 \text{ANN} \gamma ^2 n^n d^{-n-1} x(n) \left(\prod _{k=1}^{n-1} x(k)\right)^2 \left(\sum _{k=1}^{n-1} x(k)+x(n)\right)}{\text{g1k0}^3}+\frac{\text{ANN} \gamma ^2 x(n) \left(\prod _{k=1}^{n-1} x(k)\right)}{d\, \text{g1k0}^2}+\frac{\text{ANN} \gamma ^2 \left(\prod _{k=1}^{n-1} x(k)\right) \left(\sum _{k=1}^{n-1} x(k)+x(n)\right)}{d\, \text{g1k0}^2}-\frac{\text{ANN} \gamma ^2 \prod _{k=1}^{n-1} x(k)}{\text{g1k0}^2}+\prod _{k=1}^{n-1} x(k)

Similar to the newton_D function, we notice K through observation, and denote \prod _{k=1}^{n-1} x(k) as Prod_i to obtain the following formula:

f'(x(n))=K \left(\frac{\text{g1k0}^2 n^{-n} d^n}{\text{ANN} \gamma ^2 x(n)}+\left(\frac{\frac{n^{-n} \text{S\_i}\, d^n}{x(n)}+2 n^{-n} d^n}{d}-\frac{n^{-n} d^n}{x(n)}+\frac{2 \text{Prod\_i}\, S}{d\, \text{g1k0}}-\frac{2 \text{Prod\_i}}{\text{g1k0}}\right)\right)

We notice that Prod_i often appears in the numerator, and Prod_i * x(n) = Prod, so we factor \frac{1}{x(n)} out of the expression for simplification to get:

f'(x(n))=\frac{K}{x(n)}\left(\frac{\text{g1k0}^2 n^{-n} d^n}{\text{ANN} \gamma ^2}+\frac{n^{-n} S d^n+2 n^{-n} d^n x(n)}{d}-\frac{n^{-n} d^n x(n)}{d}+n^{-n} \left(-d^n\right)+\frac{2 \text{Prod}\, S}{d\, \text{g1k0}}-\frac{2 \text{Prod}}{\text{g1k0}}\right)

Continuing to observe, we can see that the formula contains many d^n n^{-n} terms; after rearranging, the following formula is obtained:

f'(x(n))=\frac{\left(K n^{-n} d^n\right)}{x(n)} \left(\frac{\text{g1k0}^2}{\text{ANN} \gamma ^2}+\frac{2 \text{K0}\, S}{d\, \text{g1k0}}+\frac{x(n)+S}{d}-\frac{2 \text{K0}}{\text{g1k0}}-1\right)

Many terms contain d in their denominators; we can move 1/d outside the parentheses to get:

f'(x(n))=\frac{\left(K n^{-n} d^{n-1}\right)}{x(n)} \left(\frac{\text{g1k0}^2 d}{\text{ANN} \gamma ^2}+S+\frac{2 \text{K0}\, S}{\text{g1k0}}+x(n)-\frac{2 \text{K0}\, d}{\text{g1k0}}-d\right)

Observing again, we find that S appears repeatedly; combining coefficients gives mul2=1+\frac{2 \text{K0}}{\text{g1k0}}, which we mark as mul2, and as before we mark the part of the first, more complex term that does not involve K0 or the for loop as mul1=d*\frac{\text{g1k0}^2}{\text{ANN} \gamma ^2}.

Denote x(n) as the y to be solved for by newton_y:

f'(x(n))=\frac{\left(K n^{-n} d^n\right)}{y}*(y + S * mul2 + mul1 - D * mul2)

Denote the part in parentheses in the above formula as yfprime:

\text{yfprime}=y + S * mul2 + mul1 - D * mul2


We can see that it is consistent with the core iteration in the code

In the same way, the numerator can be transformed by Kd^2, and substituting it into the following formula yields a similar result:


y_{new} = (\text{yfprime} + D - S) / \text{fprime} + mul1 / \text{fprime} * (1 - K0) / K0

We obtain exactly the same core Newton formula as in the code.

def newton_y(A, gamma, x, D, i):
    N = len(x)

    y = D // N
    K0_i = 10**18
    S_i = 0
    x_sorted = sorted(_x for j, _x in enumerate(x) if j != i)
    convergence_limit = max(max(x_sorted) // 10**14, D // 10**14, 100)
    for _x in x_sorted:
        y = y * D // (_x * N)  # Small _x first
        S_i += _x
    for _x in x_sorted[::-1]:
        K0_i = K0_i * _x * N // D  # Large _x first

    for j in range(255):
        y_prev = y

        K0 = K0_i * y * N // D
        S = S_i + y

        _g1k0 = abs(gamma + 10**18 - K0)

        # D / (A * N**N) * _g1k0**2 / gamma**2
        mul1 = 10**18 * D // gamma * _g1k0 // gamma * _g1k0 * A_MULTIPLIER // A

        # 2*K0 / _g1k0
        mul2 = 10**18 + (2 * 10**18) * K0 // _g1k0

        yfprime = (10**18 * y + S * mul2 + mul1 - D * mul2)
        fprime = yfprime // y
        assert fprime > 0  # Python only: f' > 0

        # y -= f / f_prime;  y = (y * fprime - f) / fprime
        y = (yfprime + 10**18 * D - 10**18 * S) // fprime + mul1 // fprime * (10**18 - K0) // K0

        if j > 100:  # Just logging when doesn't converge
            print(j, y, D, x)
        if y < 0 or fprime < 0:
            y = y_prev // 2
        if abs(y - y_prev) <= max(convergence_limit, y // 10**14):
            return y

    raise Exception("Did not converge")

4. Applicability of Newton's method and summary of code ideas.

4.1 The applicability of Newton's method in Defi.

Newton's method is a remarkably general algorithm. Provided that a good initial value can be given, it solves most equation-solving problems that lack analytical solutions within about 10 iterations. It is therefore suitable for most of the more complex DeFi numerical algorithms, and we believe it will be adopted by more and more developers.
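As a minimal self-contained illustration of this pattern, here is the classic integer Newton iteration for the square root, showing the same ingredients used throughout Curve's code: a good initial guess, a hard iteration cap, and a convergence check. This is a generic sketch, not Curve code:

```python
def isqrt_newton(n):
    """Integer square root via Newton's method (Babylonian iteration);
    a generic sketch of the pattern, not Curve code."""
    assert n > 0
    x = 1 << ((n.bit_length() + 1) // 2)   # initial guess, guaranteed >= sqrt(n)
    for steps in range(1, 256):            # hard iteration cap, as in Curve's loops
        x_new = (x + n // x) // 2          # Newton step for f(x) = x^2 - n
        if x_new >= x:                     # converged: x == floor(sqrt(n))
            return x, steps
        x = x_new
    raise Exception("Did not converge")

root, steps = isqrt_newton(12345678901234567890**2 + 10**9)
assert root == 12345678901234567890
assert steps <= 10   # a good initial guess converges in a handful of steps
```

With a bit-length-based initial guess the loop converges in well under ten iterations even for 128-bit inputs, which is why the quality of the starting value matters so much in practice.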

Popularizing Newton's method also popularizes Curve V1 & V2 at the same time, which was the original intention of our research. As shown above, compared with the textbook version of Newton's method, employing it in real business code has a much higher threshold.

4.2 Summary of the application of Newton's method in vyper&solidity.

  • Decide which terms to simplify and factor out as common factors, and which to turn into intermediate variables computed first by a loop: some intermediate variables in a code implementation of Newton's method are generated by looping N times. These variables should be chosen together with the recurring forms that appear during mathematical simplification. For example, in the function newton_D we found that K0 and K occur frequently, so the intermediate variables should be designed around K0 or K rather than around the $n^n$ or $d^n$ contained inside K0.
  • The principle of multiplying before dividing: minimize division in the form numerator/denominator; put fractions over a common denominator so the operation becomes a multiplication. Watch out for EVM numerical overflow when multiplying (consult us if you run into a numerical overflow).
  • Differentiation and simplification can reduce the precision of some formulas: the precision in the code must be adjusted manually so that all terms share a consistent precision; only with consistent precision will the code reproduce the result of the mathematical formula.
  • Appropriately replace frequently occurring forms with simple symbols.
  • Prevent numerical overflow; the principle of large before small: executing Newton's method in the EVM requires interleaving appropriate divisions into the long multiplication loops to balance precision against overflow. Also note that division involving large numbers should be performed first and small ones later, i.e. the data should be arranged from large to small.
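The multiply-before-divide principle can be seen in a toy 1e18 fixed-point example (values chosen arbitrarily):

```python
# "Multiply before divide" in 1e18 fixed-point integer math.
# Python ints never overflow, but uint256 in the EVM does, which is why
# Curve interleaves divisions inside its multiplication loops.
ONE = 10**18
a = 7 * 10**17        # 0.7 in fixed point
b = 3 * 10**18        # 3.0 in fixed point

divide_first = a // ONE * b    # truncates 0.7 to 0 before multiplying
multiply_first = a * b // ONE  # keeps full precision until the final division

assert divide_first == 0
assert multiply_first == 21 * 10**17   # 0.7 * 3.0 = 2.1

# Large-before-small: when a chain of multiplications would exceed uint256,
# divide by the largest factors first so intermediate products stay small
# while the relative truncation error stays low.
```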

4.3 Difficulties in understanding the code of Curve V2

Compared with Uniswap V3, the biggest difficulties in understanding the Curve V2 project are:

  1. Curve leans more heavily on mathematical optimization, and much of its DeFi code logic is original within the crypto community.
  2. More than 95% of Curve V2's code was contributed by Curve's CEO Michael, with few comments, and the comments supplied by others contain some errors.
  3. Some key points are stated imprecisely by Michael in the whitepaper, which makes it difficult to understand.
  4. Compared with Uniswap, Curve lacks research by others that could serve as a reference.

4.4 Other algorithms of Curve and precision control problems

Due to space limitations, this paper does not present a particularly complete description of Newton's method used in Curve. Another difficulty in constructing Newton's method in actual business code is the problem of precision control, which is worth summarizing in another detailed article.

At the same time, there are other algorithms in the Curve V2 math library that lack annotations and are difficult to understand. For example, the halfpow function is actually an application of the binomial series in the non-integer domain.
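For intuition only, here is a floating-point sketch of that idea: approximating $0.5^f$ for a fractional exponent $f$ with the generalized binomial series $(1+x)^f$ at $x=-0.5$. This illustrates the principle but is not Curve's fixed-point halfpow implementation:

```python
def half_pow_frac(f, terms=40):
    """Approximate 0.5**f for 0 <= f < 1 via the generalized binomial
    series (1 + x)**f with x = -0.5 (floats, illustration only; Curve's
    halfpow does the same thing in uint256 fixed point)."""
    result = 1.0
    term = 1.0
    for k in range(1, terms):
        # Build C(f, k) * x**k term by term: multiply by (f - k + 1)/k * x
        term *= (f - (k - 1)) / k * (-0.5)
        result += term
    return result

assert abs(half_pow_frac(0.3) - 0.5**0.3) < 1e-10
```

Because $|x| = 0.5$, each successive term shrinks by roughly half, so a few dozen terms reach full precision.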

We have fully worked through the mathematical algorithms and the concrete code of Curve V1 & V2, including their business logic.

At present, 0xstan has completed a Solidity-based reimplementation of the core contracts of Curve V1.

At present, the Code Review sections of Curve V1 and Curve V2 have been published on the official website.