Probability Theory Origins
Probability theory traces back to questions about games of chance. Gerolamo Cardano analyzed dice games in the 16th century, and in the 17th century Blaise Pascal and Pierre de Fermat formalized these ideas in their 1654 correspondence, laying the groundwork for modern probability theory and combinatorics.
Defining Probability
Probability quantifies the likelihood of events, ranging from 0 (impossible) to 1 (certain). In the classical definition, it's the ratio of favorable outcomes to the total number of possible outcomes, assuming every outcome is equally likely.
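The classical ratio above can be sketched in a few lines; `classical_probability` is an illustrative helper name, not from the original text:

```python
from fractions import Fraction

def classical_probability(favorable, total):
    """Classical probability: favorable outcomes over equally likely total outcomes."""
    return Fraction(favorable, total)

# A fair six-sided die: probability of rolling an even number (2, 4, or 6).
p_even = classical_probability(3, 6)
print(p_even)  # 1/2
```

Using `Fraction` keeps the result exact instead of introducing floating-point rounding.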
Independent vs. Dependent
Events are independent if one's occurrence doesn't affect the other's probability, so P(A and B) = P(A)P(B). For dependent events, one event shifts the other's likelihood, and P(A and B) = P(A)P(B|A). This distinction is crucial for calculating probabilities in complex scenarios.
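A minimal sketch of both cases, using two coin flips (independent) and drawing cards without replacement (dependent); the specific examples are illustrative, not from the original text:

```python
from fractions import Fraction

# Independent: two fair coin flips. P(H and H) = P(H) * P(H).
p_heads = Fraction(1, 2)
p_two_heads = p_heads * p_heads  # 1/4

# Dependent: drawing two aces from a 52-card deck without replacement.
# After the first ace is drawn, only 3 aces remain among 51 cards.
p_first_ace = Fraction(4, 52)
p_second_ace_given_first = Fraction(3, 51)
p_two_aces = p_first_ace * p_second_ace_given_first  # 1/221
```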
Conditional Probability
Conditional probability, written P(A|B), assesses the chance of an event given that another has occurred. It's vital in fields like medicine for understanding the likelihood of a disease given observed symptoms, updating prior probability estimates.
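The defining formula P(A|B) = P(A and B) / P(B) can be sketched with a simple die example; `conditional` is a hypothetical helper name for illustration:

```python
from fractions import Fraction

def conditional(p_a_and_b, p_b):
    """P(A | B) = P(A and B) / P(B)."""
    return Fraction(p_a_and_b) / Fraction(p_b)

# Roll a fair die. A = "roll a 2", B = "roll an even number".
# P(A and B) = 1/6 and P(B) = 1/2, so P(A | B) = 1/3.
p = conditional(Fraction(1, 6), Fraction(1, 2))
print(p)  # 1/3
```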
Bayes' Theorem
Bayes' Theorem revises probabilities in light of new evidence. It's transformative, underpinning modern fields like machine learning. Published posthumously in 1763, it saw little use until its value in decision theory and statistics became evident.
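A minimal sketch of the medical-testing use of Bayes' theorem; the prevalence and test accuracies below are illustrative assumptions, not from the original text:

```python
def bayes_posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' theorem.

    P(D|+) = P(+|D) P(D) / [P(+|D) P(D) + P(+|not D) P(not D)]
    """
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# Assumed numbers: 1% prevalence, 99% sensitivity, 5% false-positive rate.
posterior = bayes_posterior(prior=0.01, sensitivity=0.99, false_positive_rate=0.05)
print(round(posterior, 3))  # 0.167
```

Even with a quite accurate test, the posterior for a rare condition stays modest: the prior matters.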
Law of Large Numbers
This law states that as the number of independent trials grows, the observed relative frequency of an event converges to its theoretical probability. Astonishingly, it implies that randomness has a predictable structure when observed in aggregate.
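The convergence can be seen in a quick coin-flip simulation; `empirical_frequency` and the fixed seed are illustrative choices:

```python
import random

def empirical_frequency(n, seed=42):
    """Fraction of heads in n simulated fair coin flips."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n))
    return heads / n

# The observed frequency drifts toward the theoretical 0.5 as n grows.
for n in (10, 1_000, 100_000):
    print(n, empirical_frequency(n))
```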
Monty Hall Problem
A brain teaser showcasing counter-intuitive probability. Given three doors, one hiding a prize, switching your choice after the host reveals a non-prize door raises the chance of winning from 1/3 to 2/3. A vivid example of probability theory's non-intuitive nature.
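A simulation makes the 1/3 vs. 2/3 split concrete; `monty_hall_win_rate` and its parameters are illustrative names for this sketch:

```python
import random

def monty_hall_win_rate(switch, trials=100_000, seed=0):
    """Simulate Monty Hall games; return the fraction won."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        prize = rng.randrange(3)
        choice = rng.randrange(3)
        # Host opens a door that is neither the contestant's pick nor the prize.
        opened = next(d for d in range(3) if d != choice and d != prize)
        if switch:
            # Switch to the one remaining closed door.
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == prize)
    return wins / trials

# Staying wins about 1/3 of the time; switching wins about 2/3.
print(monty_hall_win_rate(switch=False), monty_hall_win_rate(switch=True))
```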