Probability – Thoughts on Deep Learning by Goodfellow et al.

About probability and why we need it in ML.

In chapter 3.1 of the book Deep Learning by Goodfellow et al., three arguments for probability in computer science, and especially in machine learning, are listed. According to the text, these arguments are summarized from or inspired by Pearl (1988).

The first argument is that there exists inherent stochasticity. Quantum mechanics is brought up as an example; others are theoretical scenarios that are postulated to be random, such as card games with a perfectly shuffled deck.
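To make the card example concrete, here is a minimal Python sketch (the trial count and deck encoding are my own choices): once we postulate that every ordering of the deck is equally likely, the top card becomes a uniform draw and the usual probabilities follow.

```python
import random

# Encode a 52-card deck as (rank, suit) pairs; rank 1 = ace.
deck = [(rank, suit) for rank in range(1, 14) for suit in "SHDC"]

trials = 100_000
aces = 0
for _ in range(trials):
    random.shuffle(deck)   # postulate: every ordering is equally likely
    rank, _ = deck[0]      # draw the top card
    aces += rank == 1

print(aces / trials)       # converges to 4/52 ≈ 0.077
```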

The second is "incomplete observability". The authors say that even a deterministic system can appear stochastic if not all variables are observed or measured. Their example is the Monty Hall problem: the outcome is deterministic given the contestant's choice, yet from the contestant's point of view it remains uncertain.
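A small simulation makes this point nicely. The code below is my own sketch (the host policy of always opening the lowest-numbered goat door is an assumption; any valid policy gives the same result): the whole system is deterministic given the hidden car position, but a contestant who cannot observe that variable sees a 1/3 vs. 2/3 outcome.

```python
import random

def monty_hall(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)      # hidden variable: where the car is
        pick = random.randrange(3)     # contestant's initial choice
        # The host's behaviour is deterministic given full knowledge:
        # open a door that hides a goat and was not picked.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += pick == car
    return wins / trials

print(f"stay:   {monty_hall(switch=False):.3f}")  # ~0.333
print(f"switch: {monty_hall(switch=True):.3f}")   # ~0.667
```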

The third one is "incomplete modelling", which can occur when we do not have enough information to build a complete model.
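The book illustrates this with a robot that discretizes space. Here is a hypothetical toy version of that idea (all values made up): a model that only stores whole-metre cells has to express the precision it discarded as uncertainty, even when the true position was observed exactly.

```python
# Hypothetical 1-D robot that stores object positions only as whole-metre
# cells. The true position is a single exact number, but the model throws
# that precision away, so its predictions have to carry uncertainty.
true_position = 3.27                # metres, fully observed
cell = round(true_position)         # the model keeps only the cell index

# The best the model can assert is a uniform belief over the cell:
low, high = cell - 0.5, cell + 0.5
print(f"model's belief: position in [{low}, {high}) m")
```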


In my opinion, argument one is really just a special case of arguments two and three.

If we could build a perfect model of subatomic particles and observe every aspect of them, we should be able to predict their behaviour.

This is of course not possible today. But if there is no true randomness in reality, as J. Schmidhuber argues in one of his publications on his website, then argument one collapses into an example of argument two or three.

In the end, probability is an artificial construct: a tool for building generic models that are forced to learn abstract representations, because no model could ever gather all observations down to the subatomic level.

