Thanks for the comment - uGMM neurons are not just "RBFs with log outputs". Each neuron is a mixture of Gaussians with trainable means, variances, and mixture weights, encoding uncertainty and multimodality that propagates through the network. The log-likelihood output is simply a consequence of this probabilistic formulation, not an innovation in itself.
Thanks - that's correct: the Gaussian mixture parameters (mu, sigma, pi) are learned as functions of the input from the previous layer. So it's still a feedforward net: the activations of one layer determine the mixture parameters for the next.
The reason the neuron’s output is written as a log-density Pj(y) is just to emphasize the probabilistic view: each neuron is modeling how likely a latent variable y would be under its mixture distribution.
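Here's a minimal sketch (PyTorch) of how I picture a single uGMM neuron under that description. I'm assuming one univariate component per incoming connection, with (pi, mu, sigma) as free per-connection parameters and the previous layer's activations as the points at which the mixture is evaluated - the paper's exact coupling between inputs and parameters may differ, so treat this as illustrative rather than the reference implementation.

```python
import torch
import torch.nn as nn

class UGMMNeuron(nn.Module):
    """Hypothetical single uGMM neuron: a univariate Gaussian mixture over its inputs."""
    def __init__(self, in_features: int):
        super().__init__()
        # one (pi, mu, sigma) triple per incoming connection (~3x an MLP weight)
        self.logit_pi = nn.Parameter(torch.zeros(in_features))    # mixture weights, pre-softmax
        self.mu = nn.Parameter(torch.randn(in_features))          # component means
        self.log_sigma = nn.Parameter(torch.zeros(in_features))   # log std devs, exp() keeps them positive

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features) activations from the previous layer
        log_pi = torch.log_softmax(self.logit_pi, dim=-1)
        comp = torch.distributions.Normal(self.mu, self.log_sigma.exp())
        # mix log N(x_k; mu_k, sigma_k^2) across components with log-sum-exp
        return torch.logsumexp(log_pi + comp.log_prob(x), dim=-1)  # (batch,) log-density output
```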
Thanks - good question. In theory, the uGMM layer could complement CNNs in different ways - for example, one could imagine (as you mentioned):
- using standard convolutional layers for feature extraction,
- then replacing the final dense layers with uGMM neurons to enable probabilistic inference and uncertainty modeling on top of the learned features (a rough sketch of this wiring is below).
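For what it's worth, here is a rough sketch of that hybrid, reusing the hypothetical UGMMNeuron from the sketch above; the backbone, feature sizes, and class count are placeholders, not anything from the paper:

```python
import torch
import torch.nn as nn

class UGMMHead(nn.Module):
    """One uGMM neuron per output class, each scoring the extracted features with a log-density."""
    def __init__(self, in_features: int, n_classes: int):
        super().__init__()
        self.neurons = nn.ModuleList([UGMMNeuron(in_features) for _ in range(n_classes)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.stack([neuron(x) for neuron in self.neurons], dim=-1)  # (batch, n_classes)

backbone = nn.Sequential(                                   # ordinary convolutional feature extractor
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
)
model = nn.Sequential(backbone, UGMMHead(in_features=16 * 4 * 4, n_classes=10))
```

How best to train such a head (discriminatively on normalized log-densities, or with a likelihood-style objective) is exactly the kind of open question the sketch doesn't settle.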
My current focus, however, is exploring how uGMMs translate into Transformer architectures, which could open up interesting possibilities for probabilistic reasoning in attention-based models.
Thanks for the comment. Just to clarify, the uGMM-NN isn't simply "Gaussian sampling along the parameters of nodes."
Each neuron is a univariate Gaussian mixture with learnable mean, variance, and mixture weights. This gives the network the ability to perform probabilistic inference natively inside its architecture, rather than approximating uncertainty after the fact.
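Concretely, using the Pj(y) notation from earlier (the indexing is mine), each neuron j computes something of the form:

log Pj(y) = log( pi_j1 * N(y; mu_j1, sigma_j1^2) + ... + pi_jK * N(y; mu_jK, sigma_jK^2) ),  with pi_j1 + ... + pi_jK = 1.

As I read it, no sampling is needed in the forward pass: the mixture density is evaluated exactly in log space, and the uncertainty lives in the learned variances and mixture weights themselves.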
The work isn’t framed as "replacing MLPs." The motivation is to bridge two research traditions:
- probabilistic graphical models and probabilistic circuits (relatively newer)
- deep learning architectures
That's why the Iris dataset (despite being simple) was included - not as a discriminative benchmark, but to show the model can be trained generatively in a way similar to PGMs, something a standard MLP cannot do - hence the other benefits of the approach mentioned in the paper.
Thanks for writing back! I appreciate the plan to integrate the two architectures. On that front, it might be interesting to have a future research section - like what would be uniquely good about this architecture if scaled up?
On ‘usefulness’ I think I’m still at my original question - it seems like an open theoretical question whether the architecture you described, with its tripled-or-greater training budget, data budget, and probably close-to-triple-or-greater inference budget, cannot be closely approximated by a “fair equivalent”-sized MLP.
I hear you that the architecture can do more, but can you talk about this fair-size question I have? That is, if a PGM of the same size as your original network in terms of weights and depth is as effective, then we’d still get a space saving by just running the two networks (MLP and PGM) side by side.
That’s a fair question. You’re right that on paper a uGMM neuron looks like it “costs” ~3× an MLP weight. But there are levers to balance that. For example, the paper discusses parameter tying, where the Gaussian component means are tied directly to the input activations. In that setup, each neuron only learns the mixture weights and variances, which cuts parameters significantly while still preserving probabilistic inference. The tradeoff may be reduced expressiveness, but it shows the model doesn’t have to be 3x heavier.
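To put rough numbers on that, here is a back-of-the-envelope count for a single dense layer, assuming for illustration one mixture component per incoming connection so the per-connection factors are exactly the 3x / 2x discussed above (real counts depend on how many components each neuron uses, and biases are ignored):

```python
# Parameter counts for one 512 -> 256 dense layer under the assumptions above.
in_features, out_features = 512, 256
mlp       = in_features * out_features      # 1 weight per connection            -> 131,072
ugmm      = 3 * in_features * out_features  # (pi, mu, sigma) per connection     -> 393,216
ugmm_tied = 2 * in_features * out_features  # means tied, learn only (pi, sigma) -> 262,144
print(mlp, ugmm, ugmm_tied)
```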
More broadly: traditional graphical models were largely intractable at deep learning scale until probabilistic circuits, which introduced tractable probabilistic semantics without exploding parameter counts. Circuits do this by constraining model structure. uGMM-NN sits differently: it brings probabilistic reasoning inside dense architectures.
So while the compute cost is real, the “fair comparison” isn’t just about per-weight parameter counts; it’s also about what kinds of inference the model can do at all, and about the added interpretability of mixture-based neurons, which traditional MLP neurons don’t provide. It shares some spirit with recent work like KAN, but tackles the problem through probabilistic modeling rather than spline-based function fitting.
uGMM-NN is a novel neural architecture that embeds probabilistic reasoning directly into the computational units of deep networks. Unlike traditional neurons, which apply weighted sums followed by fixed nonlinearities, each uGMM-NN node parameterizes its activations as a univariate Gaussian mixture, with learnable means, variances, and mixing coefficients.
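To make the contrast with a weighted sum plus fixed nonlinearity concrete, here is a hypothetical layer-level version of the neuron sketched earlier - written as a drop-in for a Linear-plus-activation block under the same assumptions (one univariate component per connection, previous-layer activations as evaluation points), not the paper's reference code:

```python
import torch
import torch.nn as nn

class UGMMLayer(nn.Module):
    """Hypothetical dense layer whose units are univariate Gaussian mixtures."""
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # one (pi, mu, sigma) triple per (output, input) connection
        self.logit_pi = nn.Parameter(torch.zeros(out_features, in_features))
        self.mu = nn.Parameter(torch.randn(out_features, in_features))
        self.log_sigma = nn.Parameter(torch.zeros(out_features, in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features) -> (batch, out_features); each output is a log-density,
        # not a weighted sum passed through a fixed nonlinearity
        log_pi = torch.log_softmax(self.logit_pi, dim=-1)                # normalize mixture weights per unit
        comp = torch.distributions.Normal(self.mu, self.log_sigma.exp())
        comp_logp = comp.log_prob(x.unsqueeze(1))                        # (batch, out_features, in_features)
        return torch.logsumexp(log_pi + comp_logp, dim=-1)               # mixture combined in log space
```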
Fast forward a few years: now, with the demand for learning-based software, there is no doubt that mathematics is indeed a requirement for data-driven software.
You may not need to know the math to build such software, given the tools we have available. However, you would need to know the math to understand, at the very least, the limitations and pros/cons of a particular ML algorithm for a given application.