This article was generated entirely with AI to explain the EML function.

The power of the EML function $\operatorname{eml}(x, y) = e^x - \ln(y)$ lies in its ability to isolate the core operations of arithmetic and transcendental math through recursive nesting.

If we treat $\operatorname{eml}(x, y)$ as a gate, and use the constant $1$ as our "high signal," we can "peel" the exponential and logarithmic layers away to recover everything else.

1. Recovering the "Identity" Functions

To build anything, you first need to extract the raw $e^x$ and $\ln(x)$ functions from the EML gate.

  • To get $e^x$: Set $y = 1$. Since $\ln(1) = 0$, we get $\operatorname{eml}(x, 1) = e^x - \ln(1) = e^x$.
  • To get $\ln(x)$: This is more clever. We reuse the result of the first step and nest EML within itself to cancel the exponential: $\operatorname{eml}(1, \operatorname{eml}(\operatorname{eml}(0, x), 1)) = e - (1 - \ln(x)) = (e - 1) + \ln(x)$, so $\ln(x)$ is recovered up to a constant shift (see the sketch below).
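
To make the peeling concrete, here is a minimal numeric sketch in Python. The gate definition follows directly from $\operatorname{eml}(x, y) = e^x - \ln(y)$; the helper names `exp_via_eml` and `ln_via_eml` are illustrative, not from any reference implementation:

```python
import math

def eml(x, y):
    """The EML gate: eml(x, y) = e^x - ln(y). Requires y > 0."""
    return math.exp(x) - math.log(y)

def exp_via_eml(x):
    """e^x = eml(x, 1), since ln(1) = 0."""
    return eml(x, 1.0)

def ln_via_eml(x):
    """ln(x) from nested gates: eml(1, eml(eml(0, x), 1)) = (e - 1) + ln(x),
    so subtracting the constant (e - 1) recovers ln(x) exactly."""
    return eml(1.0, eml(eml(0.0, x), 1.0)) - (math.e - 1.0)

x = 2.3
print(exp_via_eml(x), math.exp(x))   # both ~9.9742
print(ln_via_eml(x), math.log(x))    # both ~0.8329
```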

2. Building Arithmetic (The "NAND" Logic)

Once you have $e^x$ and $\ln(x)$, you can perform multiplication and division using the standard logarithmic identities:

  • Multiplication: $x \cdot y = e^{\ln(x) + \ln(y)}$
  • Division: $x / y = e^{\ln(x) - \ln(y)}$

Because EML naturally contains a subtraction operator ($-$), it acts as the bridge. By feeding logarithms into the $x$ and $y$ slots of the EML function, you can create a "mixer" that outputs sums or products of the inputs.
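
As a sketch of that "mixer" idea, assuming (purely for illustration) that logarithms of the inputs are available as encodings; `multiply` and `divide` are hypothetical helper names, not constructions from the original paper:

```python
import math

def eml(x, y):
    """The EML gate: eml(x, y) = e^x - ln(y)."""
    return math.exp(x) - math.log(y)

def multiply(a, b):
    """a*b = e^(ln a + ln b); the outer exponential is one gate, eml(., 1)."""
    return eml(math.log(a) + math.log(b), 1.0)

def divide(a, b):
    """a/b = e^(ln a - ln b). Here the subtraction is supplied by the gate itself:
    eml(ln(ln a), b) = ln(a) - ln(b), then eml(., 1) exponentiates (needs a > 1, b > 0)."""
    return eml(eml(math.log(math.log(a)), b), 1.0)

print(multiply(3.0, 4.0))   # ~12.0
print(divide(5.0, 2.0))     # ~2.5
```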

3. Transcendental and Complex Functions

The paper suggests that once you have basic arithmetic and the exponential function, you can reach the rest of the mathematical "library" via Euler's Formula:

$$e^{ix} = \cos(x) + i\sin(x)$$

By allowing the inputs to be complex, the EML gate can generate circles (trigonometry), hyperbolic functions ($\sinh, \cosh$), and even pi ($\pi$). Essentially, any function that can be written as a Taylor series can be approximated or exactly represented by a specific "circuit" of EML gates.
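
A small numeric sketch of this step using Python's `cmath` (again illustrative, not a reference implementation):

```python
import cmath

def eml(x, y):
    """EML over the complex numbers, using the principal branch of log."""
    return cmath.exp(x) - cmath.log(y)

x = 0.7

# Euler's formula through a single gate: eml(i*x, 1) = e^{ix} = cos(x) + i*sin(x)
z = eml(1j * x, 1)
print(z.real, z.imag)                  # ~0.7648, ~0.6442  (cos 0.7, sin 0.7)

# A hyperbolic function from two gates plus an average: cosh(x) = (e^x + e^{-x}) / 2
cosh_x = (eml(x, 1) + eml(-x, 1)) / 2
print(cosh_x.real)                     # ~1.2552
```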

4. Why it works for Machine Learning

For tasks like symbolic regression, this is a game-changer. Instead of a neural network searching over a large library of candidate operators (addition, sine, square root, etc.), it only has to optimize one thing: the structure of the EML tree.

  • Uniformity: Every node in the tree computes the exact same operation.
  • Differentiability: Since $e^x$ and $\ln(x)$ are both differentiable, you can use backpropagation to "train" the constants within an EML circuit to fit a specific dataset (see the sketch below).
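
As a minimal sketch of what "training the constants" could look like, assuming PyTorch; the `EMLGate` module name and the softplus guard on the log argument are my choices here, not something specified by the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EMLGate(nn.Module):
    """One trainable EML node: exp(a*x + b) - log(softplus(c*x + d)).

    a, b, c, d are the learned constants; softplus keeps the log argument
    positive so the node stays differentiable wherever backprop visits."""
    def __init__(self):
        super().__init__()
        self.a = nn.Parameter(0.1 * torch.randn(1))
        self.b = nn.Parameter(torch.zeros(1))
        self.c = nn.Parameter(0.1 * torch.randn(1))
        self.d = nn.Parameter(torch.ones(1))

    def forward(self, x):
        return torch.exp(self.a * x + self.b) - torch.log(F.softplus(self.c * x + self.d))

# Fit a single gate to a toy target with plain backpropagation.
gate = EMLGate()
opt = torch.optim.Adam(gate.parameters(), lr=1e-2)
xs = torch.linspace(0.1, 3.0, 64).unsqueeze(1)
target = torch.exp(0.5 * xs)                     # toy target: e^{0.5 x}
for _ in range(500):
    opt.zero_grad()
    loss = ((gate(xs) - target) ** 2).mean()
    loss.backward()
    opt.step()
print(loss.item())
```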


Summary of the "Circuit"

Target function → EML construction (conceptual):

  • Exponential: $\operatorname{eml}(x, 1)$
  • Negation: $\operatorname{eml}(1, \operatorname{eml}(x, 1))$ (yields $e - x$, then shift)
  • Multiplication: $\exp(\ln(x) + \ln(y))$
  • Trigonometry: derived via complex inputs to $\exp$

Are you looking to implement this as a custom activation function or a primitive in a symbolic regression model?
