The previously discussed Granovetter threshold model is just one of numerous simple collective action models. This time we continue the topic by considering another, somewhat more complex, riot model, proposed by Epstein in [1]. This model is rather interesting in the sense that it is not static, as the original Granovetter model is: it has interesting temporal dynamics built in. In a recent paper by British mathematicians [2] this model was applied to explain the patterns observed in the 2011 London riots. So let us see… Continue reading “Epstein’s riot model”
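To give a taste of the mechanics: below is a minimal, non-spatial Python sketch of the activation rule at the heart of Epstein's civil violence model. Grievance as hardship × (1 − legitimacy), risk aversion, and the constant k = 2.3 in the arrest-probability estimate are standard ingredients of the model, but the specific parameter values here are illustrative, and the real model also has a spatial grid, movement, vision radii and jail terms, all omitted from this sketch.

```python
import math
import random

def epstein_step(agents, cops, k=2.3, threshold=0.1):
    """One update of a non-spatial sketch of Epstein's civil violence model.

    Each agent has grievance G = hardship * (1 - legitimacy) and risk
    aversion R; it turns active when G - R * P exceeds a small threshold,
    where P estimates arrest probability from the cop-to-active ratio.
    """
    active = sum(a["active"] for a in agents) or 1  # avoid division by zero
    p_arrest = 1 - math.exp(-k * cops / active)
    for a in agents:
        grievance = a["hardship"] * (1 - a["legitimacy"])
        a["active"] = grievance - a["risk"] * p_arrest > threshold
    return sum(a["active"] for a in agents)

rng = random.Random(0)
agents = [{"hardship": rng.random(), "legitimacy": 0.8,
           "risk": rng.random(), "active": False} for _ in range(1000)]
for t in range(5):
    print(t, epstein_step(agents, cops=40))
```

The feedback that makes the model dynamic is visible even in this stripped-down form: how many agents dare to riot depends on the arrest probability, which in turn depends on how many are already rioting.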

# Granovetter threshold model

Here on Physics of Risk we once again present a model of collective action. Last time we considered the Standing Ovation Model by Miller and Page, and in earlier years we have written a lot about the Kirman and Bass models, as well as the correspondence between them. In this post we cover another classic model, one describing human intention to join collective political action with inherent risk: the threshold model proposed by Mark Granovetter. Continue reading “Granovetter threshold model”
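The core mechanism fits in a few lines of Python. Each agent has a personal threshold – the fraction of the crowd that must already be participating before the agent joins – and participation is iterated until it stabilises. The two threshold distributions below are illustrative choices, not data from any particular riot:

```python
def granovetter(thresholds):
    """Iterate Granovetter's threshold dynamics until participation stabilises.

    Each agent joins once the fraction of current participants
    reaches their personal threshold (a number in [0, 1]).
    """
    n = len(thresholds)
    participating = 0.0
    while True:
        new = sum(1 for t in thresholds if t <= participating) / n
        if new == participating:
            return participating
        participating = new

# Homogeneous thresholds of 0.5: nobody starts, so nobody ever joins.
print(granovetter([0.5] * 100))  # 0.0

# Granovetter's domino: thresholds 0, 1/n, 2/n, ... cascade to everyone.
n = 100
print(granovetter([i / n for i in range(n)]))  # 1.0
```

Granovetter's famous fragility is easy to reproduce here: replace the single threshold 1/n in the second population with 2/n and the cascade stalls at one percent, even though the two threshold distributions are almost indistinguishable.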

# Standing ovation model

It has been a long time since the last interactive model on Physics of Risk. This time we return to a problem we have previously considered, but did not model.

From time to time almost every one of us has an opportunity to see a play. Afterwards everyone has to make a choice – to applaud or not. It appears to be a free choice, but actually it isn’t, as various social feedback loops are in play. This problem was treated as a simple agent-based model in a paper by Miller and Page [1]. In this text we will briefly introduce you to it. Continue reading “Standing ovation model”
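The flavour of the model can be conveyed with a deliberately simplified Python sketch. This is not Miller and Page's exact specification – their model includes seating geometry and line-of-sight effects – and the parameters `quality`, `noise` and `alpha` below are illustrative assumptions. Agents first react to the perceived quality of the show, then to the fraction of peers already standing:

```python
import random

def standing_ovation(n=500, quality=0.6, noise=0.3, alpha=0.5,
                     rounds=10, seed=1):
    """Toy standing-ovation dynamics: react to the show, then to peers."""
    rng = random.Random(seed)
    # Initial reaction: stand if the noisy perception of quality clears 0.5.
    standing = [quality + rng.uniform(-noise, noise) > 0.5 for _ in range(n)]
    for _ in range(rounds):
        frac = sum(standing) / n
        # Social feedback: once a majority is standing, everyone conforms.
        standing = [s or frac > alpha for s in standing]
    return sum(standing) / n

print(standing_ovation())             # good show: the ovation cascades
print(standing_ovation(quality=0.3))  # weak show: scattered applause only
```

Even this crude version shows the two regimes: a show good enough to get a majority on its feet ends in a full house standing, while a mediocre one leaves only the enthusiasts up.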

# J.-B. Michel: The mathematics of history

In this TED talk J.-B. Michel shares his experience in applying mathematical methods to history. According to him, mathematics combined with digitized historical data helps to reveal some interesting patterns. See for yourself!

# Checking hypotheses and the problem of p-values

The vast majority of scientific research begins with an idea of how the world works. The researcher formulates a hypothesis and tries to support it using the scientific method, usually checking experiments or observations with various statistical tools. These tools are used to process the collected data and either confirm the initial hypothesis or reject it in favor of the alternatives.

One such method is the so-called critical value approach [1]. It relies on the researcher setting a precision standard for the statistical test and accepting or rejecting the hypothesis based on it. Different branches of science have their own conventions for how large an error may be tolerated. For example, in the life sciences most published papers report statistical significance of \( p<0.05 \) (meaning that the probability of error is less than \( 5\% \)), while in physics it is common to hear about a precision of \( 5 \sigma \) (probability of error less than \( 5.7 \cdot 10^{-5} \% \)).
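These quoted numbers are easy to verify. For a two-sided test on a normally distributed statistic, the tail probability beyond \( k\sigma \) is \( \mathrm{erfc}(k/\sqrt{2}) \); a quick Python check:

```python
import math

# Two-sided tail probability of a normal statistic beyond k standard
# deviations: P(|Z| > k) = erfc(k / sqrt(2))
for k in (2, 3, 5):
    p = math.erfc(k / math.sqrt(2))
    print(f"{k} sigma -> p = {p:.2e}  ({p * 100:.1e} %)")
```

For \( k = 5 \) this gives \( p \approx 5.7 \cdot 10^{-7} \), i.e. the \( 5.7 \cdot 10^{-5} \% \) quoted above.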

At first glance the method appears to lack drawbacks. But in the context of the current science publishing tradition – mostly positive results being published – the drawbacks become evident. All statistical methods rely on numerous samples, so for these kinds of tests to work, numerous independent groups should repeat the same experiment and obtain similar conclusions. Otherwise there is a significant possibility that a positive result is just a lucky fluke. Given the pressure to publish, there is also a risk that the same research group would repeat an experiment until getting the desired statistical significance (waiting for a fluke to happen).
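The "waiting for a fluke" effect is easy to simulate. In the sketch below the null hypothesis is true by construction (all data are pure noise), yet a lab that quietly reruns the experiment up to 20 times and reports the first "significant" try will claim a discovery roughly \( 1 - 0.95^{20} \approx 64\% \) of the time. The sample sizes and retry counts are illustrative choices of mine:

```python
import math
import random

rng = random.Random(42)

def experiment(n=30):
    """One null experiment: test whether the mean of n N(0,1) draws is zero."""
    xs = [rng.gauss(0, 1) for _ in range(n)]
    z = sum(xs) / n * math.sqrt(n)           # sample mean in units of its s.e.
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

trials = 2000
# Honest labs: one experiment each, report whatever comes out.
honest = sum(experiment() < 0.05 for _ in range(trials)) / trials
# Persistent labs: rerun up to 20 times, report success if any try "works".
persistent = sum(
    any(experiment() < 0.05 for _ in range(20)) for _ in range(trials)
) / trials
print(f"honest false-positive rate:     {honest:.3f}")      # close to 0.05
print(f"persistent false-positive rate: {persistent:.3f}")  # roughly 0.64
```

The honest rate stays near the advertised 5 %, while the persistent strategy manufactures "significant" findings from pure noise most of the time.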

I did my best to shed light on this problem, but there is a rather significant chance that Hank Green will do better in this SciShow video, which I invite you to watch.

For those who are more interested in technical details I would suggest reading a draft by Nassim Nicholas Taleb [2].

# Extra Credits: Hyperinflation MMOs

In computer games players control characters and earn money by doing quests and killing monsters. They spend this money on upgrades and other neat stuff (such as their own house, a pet, etc.). But this mechanic has an inherent problem – the earned money is created literally from nothing. The money the goblin chieftain had was never used in the game’s economy; it is just a reward for the player. The same, though less obvious, story holds for NPCs – they give the tasks they were programmed to give, and the reward is also preset. The game’s economy has nothing to do with the reward given. If there is just one player, there is no problem, but if you, as a game designer, have thousands of players… you will soon face the problem of hyperinflation. More on what it is and how game designers combat it in the following Extra Credits video.
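The faucet-and-sink language used by game economists makes this easy to simulate. The toy model below is my own illustration, not taken from the video: quest rewards inject a fixed amount of gold per player per day (the faucet), while repair fees, taxes and vendor purchases destroy a fixed fraction of the existing supply (the sink); all numbers are made up.

```python
def money_supply(players, faucet_per_day, sink_fraction, days):
    """Toy MMO economy: daily gold injection vs. proportional gold removal."""
    supply = 0.0
    for _ in range(days):
        supply += players * faucet_per_day   # gold created from nothing
        supply -= supply * sink_fraction     # gold destroyed by sinks
    return supply

# With a weak sink the supply (and hence prices of player-traded goods)
# grows enormously; a stronger sink stabilises it at a much lower level.
print(money_supply(10_000, 100, 0.01, 365))
print(money_supply(10_000, 100, 0.10, 365))
```

The supply settles near faucet × (1 − sink) / sink (with faucet being the total daily injection), so a designer can stabilise prices by strengthening sinks rather than by cutting rewards, which players tend to resent far more.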

# Numberphile: The problems with “Secret Santa”

Some problems with the popular Christmas office entertainment, the “Secret Santa” game, are discussed in this Numberphile video.