Zealous modeler. Annoying statistician. Reluctant geometer. Support my writing at http://patreon.com/betanalpha. He/him.

Discounted tickets for Black and Indigenous people of color in high-income countries, and for those from low- and middle-income countries, will likely be available. If you are interested then reach out via the “Contact the Organizer” button at the bottom of the above link.

I know that training budgets are evaporating in many companies, but if you do have some funds and want to build out a strong foundation for Bayesian modeling and inference then you might be interested in my remote, open-enrollment courses starting next month, events.eventzilla.net/e/principled....

Despite the promise of big data, inferences are often limited not by the size of data but rather by…

Going to start saying that "N < infinity" is a missing data problem.

It's not deprecated. It's vintage.

This May, join us to build a strong foundation that will allow you to develop robust Bayesian analyses bespoke to your applications, events.eventzilla.net/e/principled....

In particular I'm not thinking of adding entire simplices together to get simplicial/chain complexes or anything homology related. Everything is contained within a fixed simplex.

Math peeps: Is there a well-defined notion of addition or multiplication on simplices? Specifically, if p, q are points in a K-simplex, is there an (ideally, though not necessarily, commutative and associative) operator • such that p • q is another point of the same K-simplex?

That was awesome.

If you ever find yourself struggling to conceptualize/develop/implement interpretable statistical models in a world of black box methods, or are just interested in talking about them, I set up a Discord server for (narratively) generative modeling discussion some time back, discord.gg/gGUzQ5dg.

The Probabilistic Storytelling server is a place for discussing all topics related to the modeling of explicit data genera…

Please do share with friends and colleagues! Reposts for maximum exposure are welcomed and appreciated. Only you can reduce bad uncertainty quantification.

It’s one thing to say that you embrace uncertainty. It’s another to manage the complex uncertainties that manifest in practical analyses. In my upcoming courses we’ll learn how to deal with these uncertainties and their computational consequences, events.eventzilla.net/e/principled....

I will not comment on random threads with bad Bayesian modeling and inference advice without being tagged, I will not comment on random threads with bad Bayesian modeling and inference advice without being tagged, I will not...

**cough cough** professors forever locked into the methods and technologies they learn as grad students and postdocs **cough cough**

This is your regular reminder that "I did this thing 20 years ago" is not the same thing as "I have 20 years of experience." It's not even an indicator that you have any idea of how things are done now.

Who's interested in 19,000 words, coming to a whopping 162 pages in the rendered PDF with all of the exercises, on selection modeling?

Some juicy new writing currently available exclusively to my covector+ supporters on patreon dot com, www.patreon.com/posts/new-se....

Anyone still having problems with `#include`…

I thought the linker issues were resolved by now, but all of a sudden I'm getting the error again and can't find any stale links.

If you’re curious what the newsletters look like or are simply interested in reading without signing up then you can always view previous newsletters at sendfox.com/symplectomor....

I'm sending out my next monthly newsletter tomorrow. If you want to keep up with my courses, writing, and even longer threads without having to slog through social media then you can sign up for free at sendfox.com/symplectomor....

This mailing list features monthly updates about recent and upcoming writing, talks, courses, and more from applied statistician Michael Betancourt.

You don't model the data you may have wanted or expected.

You model the data you actually collected.

Regular reminder that "an outlier" doesn't exist. Outliers exist only in reference to some assumed distribution; they are essentially data that didn't come from your assumed distribution.
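A minimal sketch of this point, with entirely made-up data and an arbitrary flagging threshold: the same point is an "outlier" only relative to the normal distribution we chose to assume.

```python
import random
import statistics

random.seed(1)

# Hypothetical data: 100 points drawn from a standard normal distribution,
# plus one point that came from somewhere else entirely.
data = [random.gauss(0.0, 1.0) for _ in range(100)] + [12.0]

# "Outlier" only makes sense relative to an assumed model.  Here we assume
# a normal distribution with moments estimated from the data themselves.
mu = statistics.fmean(data)
sigma = statistics.stdev(data)

# Flag points more than four assumed standard deviations from the mean.
outliers = [x for x in data if abs(x - mu) > 4 * sigma]
print(outliers)
```

Change the assumed distribution (say, to a Cauchy) and that same 12.0 may no longer look anomalous at all.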

My god, it’s full of stars! [Image: a collection of stars surrounded by rectangles of various sizes over a black background.]

Want to master informative model critique with posterior retrodictive checks? Check out my remote courses being offered in May and June, events.eventzilla.net/e/principled....

For more on narratively generative modeling and strategies for developing narratively generative models of your own see betanalpha.github.io/assets/case_....

These skewed interpretations then lead to decisions and predictions that poorly generalize beyond that particular data set.

Incidentally this synergy is also why developing adequate models is so important in applied analyses. Models that are too rigid will often contort themselves to fit observed data, pulling the individual parameter inferences away from their proper generative interpretations.

For many, Bayesian methods are the first opportunity they have to build bespoke, interpretable models and not have to rely on a collection of rigid black boxes that are incompatible, if not outright inconsistent, with their hard-earned domain expertise.

This is probably why “interpretable model” and “Bayesian inference” are synonymous for so many, especially in more applied fields.

There’s a natural synergy between Bayesian inference and narratively generative modeling. Not all Bayesian analyses use generative models and generative models are useful in other inferential approaches, but Bayesian inference using generative models is particularly productive.

At the same time if we embrace narratively generative modeling from the beginning then we already have a meaningful interpretation and prior modeling becomes much less onerous.

If we push through that struggle and establish a principled connection then we end up with an interpretable model that is easier to not only robustly apply to practical problems but also critique and improve as needed.

An interesting side effect of Bayesian methods is that developing an informative prior model is easiest when we can connect the parameters to our domain expertise. Prior modeling is outright frustrating when the parameters don’t admit a meaningful interpretation.

We may be able to learn parameter configurations consistent with observed data and use them to inform in-sample predictions, but we won’t have any idea how to adjust those configurations to account for new circumstances and inform predictions that generalize.

When we treat models as black boxes we lose this connection between the model and the system we’re modeling. Parameters become unlabeled knobs; we can turn them but we’re largely oblivious to their consequences.

The linear response mu = alpha + beta * x models how some latent phenomenon responds, or perhaps approximately responds, to external variations in the covariate x.

For example a regression model

y ~ normal(alpha + beta * x, sigma)

becomes narratively generative when alpha, beta, and sigma correspond to meaningful phenomena and not just mathematical patterns.
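A minimal sketch of the generative reading of that regression model, simulating observations by ancestral sampling; the parameter values here are made up purely for illustration.

```python
import random

random.seed(0)

# Hypothetical interpretations: alpha is a baseline response, beta is the
# change in response per unit change in the covariate x, and sigma is the
# scale of the measurement noise.  All values are arbitrary.
alpha, beta, sigma = 2.0, 0.5, 0.3

def simulate(x):
    """Ancestral simulation of y ~ normal(alpha + beta * x, sigma)."""
    mu = alpha + beta * x           # latent response to the covariate
    return random.gauss(mu, sigma)  # noisy observation of that response

xs = [0.0, 1.0, 2.0, 3.0]
ys = [simulate(x) for x in xs]
print(ys)
```

Because each parameter has a distinct generative role, adjusting the model to new circumstances, say larger measurement noise, means turning a labeled knob rather than an anonymous one.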

In particular the individual parameters of a narratively generative model each govern a distinct aspect of the underlying data generating process.

This interpretability provides a way of connecting statistical models to our precious domain expertise, facilitating model development, critique, and more.

Narratively generative modeling interprets statistical models as collections of data generating processes that hopefully well-approximate some true data generating process.

Who’s up for a nice thread about narratively generative modeling?

Curious about the assumptions underlying regression methods and the consequences of applying regression methods to systems where those assumptions don’t hold? Come learn about regression methods from a Bayesian modeling perspective in my summer courses, events.eventzilla.net/e/principled....

This work wouldn’t be possible without my generous supporters on patreon dot com, www.patreon.com/betanalpha. If my writing has helped your work, research, or teaching then consider supporting.

Realizing a consistent decomposition, however, requires careful use of all of the probability theory that we have developed to this point. This is especially true for the decomposition of probability density function representations of probability distributions.

Conditional probability theory allows us to break up probability distributions into smaller, more manageable pieces. This is useful for not only simplifying calculations but also building up sophisticated probability distributions in the first place.
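A tiny numerical sketch of that decomposition, using an arbitrary made-up joint distribution over two binary variables: breaking the joint into a marginal and a conditional, then checking that the pieces multiply back to the whole.

```python
# Hypothetical discrete joint distribution p(x, y) over two binary
# variables, chosen arbitrarily for illustration.
joint = {(0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.4}

# Decompose into a marginal p(x)...
p_x = {x: sum(p for (xx, _), p in joint.items() if xx == x)
       for x in (0, 1)}

# ...and a conditional p(y | x).
p_y_given_x = {(y, x): joint[(x, y)] / p_x[x]
               for x in (0, 1) for y in (0, 1)}

# Conditional probability theory says the pieces recover the joint:
# p(x, y) = p(y | x) * p(x).
for (x, y), p in joint.items():
    assert abs(p - p_y_given_x[(y, x)] * p_x[x]) < 1e-12

print(p_x)
```

The same chain-rule logic is what lets us build sophisticated joint distributions piece by piece, one manageable conditional at a time.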

My appreciation for conditional probability theory? It is very much unconditional.

In my latest chapter you, too, can learn why conditional probability theory is so great.

HTML: betanalpha.github.io/assets/chapt...

PDF: betanalpha.github.io/assets/chapt...

Why didn't any of you jerks tell me about the 69 Love Songs 26th Anniversary shows before the secondary market exploded?