It's a thread of 4 tweets. Twitter has a limit of 280 characters per tweet, so usually people split their text into multiple tweets. The link is in the 4th one, as you might expect.
I'm logged out and have adblockers on. I used to be able to see the posts that come after, but now I only see the first one. Maybe that's a recent change.
Even a relatively straightforward graphical model has traditionally been fairly challenging to implement, but automatic differentiation changed that. Some people are working on getting the best of both graphical models and black-box models (like neural networks), which is where things like this come in.
Because it's JAX, you get automatic differentiation and GPU/TPU acceleration for free, which further lets you plug these things into other models, including black-box models like NNs.
An example application is part-of-speech tagging (noun, adjective, etc.) for a sentence. You could model each word's part of speech as a separate prediction, but you know that "noun-verb-noun" is more common than "noun-noun-noun", so your predictions should influence each other. Stuff like this makes that easier.
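To make that concrete, here's a minimal sketch in plain JAX of the forward algorithm for a linear-chain CRF tagger (this is my own toy code, not SynJax's actual API). Because the whole dynamic program is written in `jax.numpy`, the points above fall out directly: `jax.grad` of the log-partition gives the per-word tag marginals, and `jax.jit`/`jax.vmap` handle compilation and batching.

```python
import jax
import jax.numpy as jnp
from jax.scipy.special import logsumexp

def crf_log_partition(emissions, transitions):
    """Log-partition (log Z) of a linear-chain CRF.

    emissions:   [seq_len, num_tags] per-word tag scores (e.g. from an NN)
    transitions: [num_tags, num_tags] score of tag i followed by tag j
    """
    def step(alpha, emit):
        # alpha[i] = log total score of all prefixes ending in tag i
        return logsumexp(alpha[:, None] + transitions, axis=0) + emit, None

    alpha, _ = jax.lax.scan(step, emissions[0], emissions[1:])
    return logsumexp(alpha)

key = jax.random.PRNGKey(0)
emissions = jax.random.normal(key, (5, 3))          # 5 words, 3 tags
transitions = jnp.log(jnp.array([[0.1, 0.8, 0.1],   # e.g. noun -> verb likely,
                                 [0.8, 0.1, 0.1],   #      noun -> noun not
                                 [0.4, 0.4, 0.2]]))

# Autodiff for free: d(log Z)/d(emissions) is exactly the matrix of
# per-word marginal tag probabilities, no hand-written backward pass.
marginals = jax.grad(crf_log_partition)(emissions, transitions)

# Compilation and batching for free: score a whole batch of sentences.
batched_log_z = jax.jit(jax.vmap(crf_log_partition, in_axes=(0, None)))
```

Training a tagger with this is then just maximizing the score of the gold tag sequence minus `crf_log_partition`, and the "predictions influence each other" part enters through the transition scores.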
edit:
The authors of this library also put a paper on arXiv describing it: https://arxiv.org/pdf/2308.03291.pdf. The abstract gives a good sense of what they're going after:
> The development of deep learning software libraries enabled significant progress in the field by allowing users to focus on modeling, while letting the library to take care of the tedious and time-consuming task of optimizing execution for modern hardware accelerators. However, this has benefited only particular types of deep learning models, such as Transformers, whose primitives map easily to the vectorized computation. The models that explicitly account for structured objects, such as trees and segmentations, did not benefit equally because they require custom algorithms that are difficult to implement in a vectorized form. SynJax directly addresses this problem by providing an efficient vectorized implementation of inference algorithms for structured distributions covering alignment, tagging, segmentation, constituency trees and spanning trees.

> With SynJax we can build large-scale differentiable models that explicitly model structure in the data. The code is available at https://github.com/deepmind/synjax.
The things in SynJax aren't neural nets themselves, as far as I can see from poking around; it's a bunch of different [complicated] probability distributions that you might encounter when modeling NNs or related tasks stochastically. It's more of a toolbox of components, implemented in JAX, for advanced statistical modeling inside a NN. It could also be useful apart from NNs.
It’s useful when you’re modeling a distribution over a structure, e.g., P(x, y) or P(x | y), rather than over a single independent label. That helps in situations where naive sample generation is intractable (CRFs) or where your data imposes dependent features (HMMs). It appears from the README (and the authors’ previous work) that this extends to fairly arbitrary structures too.
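Since the HMM's "dependent features" point is the key difference from predicting each label independently, here's a toy HMM sampler in plain JAX (again my own illustration, not SynJax code): each hidden state is drawn conditioned on the previous one, so the observations can't be treated as independent draws.

```python
import jax
import jax.numpy as jnp

def sample_hmm(key, init_logits, trans_logits, emit_logits, length):
    """Draw (hidden states, observations) from a toy HMM.

    init_logits:  [S]    log-probs of the initial hidden state
    trans_logits: [S, S] row s = log-probs of the state following s
    emit_logits:  [S, O] row s = log-probs of the observation in state s
    """
    def step(state, step_key):
        k_emit, k_next = jax.random.split(step_key)
        obs = jax.random.categorical(k_emit, emit_logits[state])
        next_state = jax.random.categorical(k_next, trans_logits[state])
        return next_state, (state, obs)

    keys = jax.random.split(key, length + 1)
    first_state = jax.random.categorical(keys[0], init_logits)
    _, (states, observations) = jax.lax.scan(step, first_state, keys[1:])
    return states, observations
```

Sampling forwards like this is the easy direction; the inference direction (marginals, most-probable structure, exact conditional sampling) is what needs the custom dynamic programs the abstract talks about, and as I read the paper, that's what the library packages up behind its distribution objects.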
The repo isn't even linked in the X post, which is strange.