2369 shaares · 447 results tagged "stats"
A sum of log-normals can be approximated by another log-normal distribution. A hack, but an interesting one.
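A minimal sketch of the trick the note alludes to: moment-match a single log-normal to the sum (Fenton-Wilkinson style). The parameters below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters of three independent log-normal terms.
mus = np.array([0.0, 0.5, 1.0])
sigmas = np.array([0.5, 0.4, 0.3])

# Exact mean and variance of each term, then of the sum (independence).
means = np.exp(mus + sigmas**2 / 2)
varis = (np.exp(sigmas**2) - 1) * np.exp(2 * mus + sigmas**2)
m, v = means.sum(), varis.sum()

# Moment-match a single log-normal to (m, v).
sigma2_fit = np.log(1 + v / m**2)
mu_fit = np.log(m) - sigma2_fit / 2

# Monte Carlo check: the fitted log-normal tracks the simulated sum.
total = sum(rng.lognormal(mu, s, 100_000) for mu, s in zip(mus, sigmas))
approx = rng.lognormal(mu_fit, np.sqrt(sigma2_fit), 100_000)
```

The fit matches the first two moments exactly; the tails are where the "hack" part shows.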
Worth keeping on hand for training sessions.
How to restart an MCMC, and how to define your own sampler with Nimble.
A nice explanation of Simpson's paradox.
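The classic kidney-stone numbers (Charig et al. 1986) make the paradox concrete: treatment A wins within each subgroup yet loses overall.

```python
# (successes, trials) per treatment and stone size, from the classic study.
success = {
    ("A", "small"): (81, 87),
    ("A", "large"): (192, 263),
    ("B", "small"): (234, 270),
    ("B", "large"): (55, 80),
}

def rate(treatment, size=None):
    """Success rate for one treatment, within a subgroup or pooled."""
    pairs = [v for (t, s), v in success.items()
             if t == treatment and (size is None or s == size)]
    won = sum(w for w, _ in pairs)
    total = sum(n for _, n in pairs)
    return won / total

# A beats B in each subgroup...
assert rate("A", "small") > rate("B", "small")
assert rate("A", "large") > rate("B", "large")
# ...but B beats A overall: Simpson's paradox.
assert rate("B") > rate("A")
```

The reversal comes from the confounder: the harder (large-stone) cases were disproportionately given treatment A.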
The Poisson process is a continuous-time version of the Bernoulli process.
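A quick numerical check of that statement: a Bernoulli process with success probability lam/n per time slice gives Binomial(n, lam/n) counts over the unit interval, which converge to Poisson(lam) as the slices shrink. The values below are arbitrary.

```python
import math

lam, k = 3.0, 2  # rate, and the count whose probability we compare

def binom_pmf(n, p, k):
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

poisson_pk = math.exp(-lam) * lam**k / math.factorial(k)

# Refine the time grid: n slices of width 1/n, success prob lam/n each.
for n in (10, 100, 10_000):
    binom_pk = binom_pmf(n, lam / n, k)
    print(f"n={n}: binomial {binom_pk:.5f} vs Poisson {poisson_pk:.5f}")
```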
Looks interesting for ABC.
Dividing by (n-1) in the variance calculation corrects a bias in the estimate of the population variance. But the bias is so small that it is peanuts, and there is hardly any situation in which correcting it would actually matter. Simpson sums it up well:
"The n vs (n−1) denominator for a variance estimator is a curiosity. It is the source of thrilling (Not thrilling) exercises or exam questions. But it is not interesting.
It could maybe set up the idea that MLEs are not unbiased. But even then, the useless correction term is not needed. Just let it be slightly biased and move on with your life.
Because if that is the biggest bias in your analysis, you are truly blessed."
Amen.
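A small simulation of the point, assuming normal data with n = 5: dividing by n biases the variance estimate by the factor (n-1)/n, and dividing by n-1 removes it.

```python
import numpy as np

rng = np.random.default_rng(42)
n, reps = 5, 200_000  # tiny samples, many replications; true variance is 1

samples = rng.normal(0.0, 1.0, size=(reps, n))
biased = samples.var(axis=1, ddof=0).mean()    # divide by n: expect (n-1)/n = 0.8
unbiased = samples.var(axis=1, ddof=1).mean()  # divide by n-1: expect 1.0
```

Even at n = 5, the "bias" is a fixed, known factor; at realistic sample sizes it vanishes into the noise.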
The Moschopoulos distribution is the distribution of a sum of gamma variables with different parameters.
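A sketch of what is being summed; the shapes and scales below are made up. Moschopoulos's series gives the exact density, but the first two moments of the sum already follow from independence.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical independent gamma terms with different shapes and scales.
shapes = np.array([2.0, 3.0, 1.5])
scales = np.array([1.0, 0.5, 2.0])

total = sum(rng.gamma(k, s, 200_000) for k, s in zip(shapes, scales))

# Moments of the sum add, whatever the scales; the full density is what
# Moschopoulos (1985) expresses as a gamma series.
mean = (shapes * scales).sum()    # 2 + 1.5 + 3 = 6.5
var = (shapes * scales**2).sum()  # 2 + 0.75 + 6 = 8.75
```

Note the sum is only exactly gamma when all scales are equal; otherwise the series expansion is needed.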
Modelling the log of the mean of a log-normal variable is not the same as modelling the mean of the log of a log-normal variable. A good explanation.
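The gap is easy to see numerically: for X ~ LogNormal(mu, sigma), E[log X] = mu while log E[X] = mu + sigma^2 / 2. The parameters below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma = 1.0, 0.8
x = rng.lognormal(mu, sigma, 500_000)

mean_of_log = np.log(x).mean()  # converges to mu = 1.0
log_of_mean = np.log(x.mean())  # converges to mu + sigma**2 / 2 = 1.32
```

The two differ by sigma^2 / 2, so the larger the log-scale variance, the more it matters which one your model targets.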
Interpreting differences in SE when using elpd. A great answer from Vehtari.
An explanation of the Kalman filter.
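A minimal 1-D sketch of the filter: random-walk state, noisy observations. The noise variances are assumed values, not from the linked explanation.

```python
import numpy as np

rng = np.random.default_rng(3)

q, r = 0.01, 0.5  # process and observation noise variances (assumed)
true = np.cumsum(rng.normal(0, np.sqrt(q), 200))  # random-walk state
obs = true + rng.normal(0, np.sqrt(r), 200)       # noisy measurements

x, p = 0.0, 1.0  # state estimate and its variance
est = []
for y in obs:
    p += q               # predict: uncertainty grows by the process noise
    k = p / (p + r)      # Kalman gain: trust in the new observation
    x += k * (y - x)     # update: move toward the innovation
    p *= (1 - k)         # posterior variance shrinks
    est.append(x)
est = np.array(est)
```

The filtered track should sit much closer to the true state than the raw observations do, which is the whole point.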
Worth keeping on hand.
To read.
Really interesting.
Spatio-temporal survival models.
There’s been a movement which has said that most research is wrong. It’s making people feel they’re doing something wrong, but that’s not the problem. The problem is that the publication system pushes you because you can only publish if you get a good, that is, small, p-value [a statistical test that indicates whether results could be due to chance]. Researchers then massage the data until they get the p-value and then it’s not reproducible. But if we were much more transparent and said, “You’re allowed to publish things which are significant or not significant because it’s useful down the road and just publish all your data and the code you used for the analyses” – if you’re transparent about what you’re doing, there’s much less opportunity to shoehorn the data into some wrong conclusion.
I feel that people misuse summaries in statistics. They feel as if statistics is going to summarize everything into one value, as if one p-value is going to summarize five years of work. It’s ridiculous. Everything is multidimensional, it’s complex. But if we could publish more of the negative results and all of the data, we would advance science much faster, because people would get insight from the negative results.
Interesting.
Interesting.
Very interesting!
Estimating standard errors for lasso regressions is apparently a real mess.
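One illustration of why it is a mess: the naive pairs bootstrap, sketched below with a hand-rolled coordinate-descent lasso, does produce standard errors, but it is known to be inconsistent for coefficients shrunk exactly to zero. Everything here (data, penalty) is made up.

```python
import numpy as np

rng = np.random.default_rng(5)

def lasso(X, y, lam, n_iter=50):
    """Plain coordinate-descent lasso: 0.5*||y - Xb||^2 + lam*||b||_1."""
    n_obs, p = X.shape
    beta = np.zeros(p)
    col_ss = (X**2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]  # partial residual
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0) / col_ss[j]
    return beta

# Toy data: 2 real predictors, 3 pure-noise predictors.
n, p = 200, 5
X = rng.normal(size=(n, p))
y = X @ np.array([2.0, -1.0, 0, 0, 0]) + rng.normal(0, 1, n)

# Naive pairs bootstrap of the lasso coefficients.
boot = np.empty((100, p))
for b in range(100):
    idx = rng.integers(0, n, n)
    boot[b] = lasso(X[idx], y[idx], lam=20.0)
se = boot.std(axis=0)
```

The bootstrap distribution for the noise coefficients piles up a point mass at zero, which is exactly the behaviour that breaks the usual SE interpretation.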