Probability and Statistics 1 (Summer 2024)

This course covers the mathematical treatment of randomness. The first half is dedicated to probability theory, the second half to statistics.

Prerequisites: Discrete Mathematics and Analysis 1 (essential); Linear Algebra 1 & 2, Analysis 2, and Combinatorics & Graphs 1 (recommended).

The exam will be written.

Reading materials: see the per-lecture references ([RS], [MIT]) in the syllabus below.

Tutorials

The tutorials will be held by Sudatta Bhattacharya, Ida Kantor, and Lluís Sabater Rojas. Credits (zápočty) from previous years are NOT valid.

Syllabus

  • Lecture 1 (20.2):
    Introduction. Two interpretations of probability. Probability theory vs Statistics. Problems with the naïve approach: Bertrand's paradox, Vitali sets (both non-examinable).
    Probability spaces, Kolmogorov's axioms. Examples: uniform and geometric distributions. Basic properties of probability measures (monotonicity, inclusion-exclusion, continuity, union bound).
    References: [RS] Lecture 1. [MIT] Lecture 1. Wikipedia: Bertrand's paradox, Vitali set.
  • Lecture 2 (27.2):
    Conditional probability, independence of two events. Operations with conditional probabilities: the chain rule, the law of total probability. Application: gambler's ruin. Bayes' theorem and its interpretation (a worked statement appears after the syllabus). Independence of a family of events (vs pairwise independence). Discrete random variables: definition.
    References: [RS] Lectures 1-2, [MIT] Lectures 2-3.
  • Lecture 3 (5.3):
    Discrete random variables: probability mass function, cumulative distribution function. Common discrete distributions: Bernoulli, binomial, geometric, hypergeometric, Poisson. Poisson as an approximation of the binomial (see the numeric check after the syllabus). A heuristic example with meteorites. Operations on random variables: applying univariate and bivariate functions.
    Expectation of a discrete random variable: introduction and definition.
    References: [RS] Lecture 3, [MIT] Lecture 5.
  • Lecture 4 (12.3):
    Existence of the expectation: issues with (absolute) convergence. Properties of the expectation: monotonicity, linearity. LOTUS: the law of the unconscious statistician (stated after the syllabus). Examples: expectations of the Bernoulli, binomial, and geometric distributions. Conditional expectation, the law of total expectation. The variance: motivation, definition, basic properties.
    References: [RS] Lecture 4, [MIT] Lectures 5-6.
  • Lecture 5 (19.3):
    Joint distribution of multiple random variables: definition and a basic example. Marginals, conditionals. Independence of discrete random variables and its consequences for the expectation and variance. The convolution formula. Continuous random variables: an introduction. The Borel sigma-algebra, the Lebesgue measure. The uniform measure on an interval.
    References: [RS] Lectures 5-6, [MIT] Lectures 6-8.
  • Lecture 6 (26.3):
    General random variables: definition. Probability measures on the reals. Density and cumulative distribution functions. Common continuous distributions: exponential, Cauchy, normal, gamma. Continuous random variables: pushforward measure, probability density function. Expectation of continuous rv's. LOTUS: continuous version.
    References: [RS] Lectures 6-7, [MIT] Lecture 8.
  • Lecture 7 (2.4):
    Expectation of continuous rv's: LOTUS, linearity. Variance and its properties. Joint distribution and independence. Examples: multivariate uniform and normal distributions. Covariance and correlation.
    References: [RS] Lecture 8, [MIT] Lectures 8-9.
  • Lecture 8 (9.4):
    Covariance and correlation (continued), the probabilistic Cauchy-Schwarz inequality. The convolution formula for continuous rv's. The Markov and Chebyshev inequalities. The weak law of large numbers. Almost sure convergence. A sequence of rv's convergent in probability but not almost surely.
    References: [RS] Lecture 9, [MIT] Lectures 11, 19.
  • Lecture 9 (16.4):
    The strong law of large numbers (statement). Applications: the Monte Carlo method for computing volumes (see the sketch after the syllabus); Borel's normal numbers. Weak convergence. The central limit theorem (statement, discussion). A coin toss example: Chebyshev's bound vs the CLT's normal approximation (compared numerically after the syllabus). Introduction to statistics.
  • Lecture 10 (23.4):
    Estimators and their properties.
    References: [RS] Lecture 11, [MIT] Lecture 23.
  • Lecture 11 (30.4):
    Maximum likelihood estimators (a worked one-parameter example appears after the syllabus). Confidence intervals.
    References: [RS] Lecture 12, [MIT] Lectures 23, 24.
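
Supplementary sketches

The sketches below expand a few syllabus items. They are standard formulations and minimal examples written for this page, not excerpts from [RS] or [MIT].

Bayes' theorem (Lecture 2). The statement follows from the chain rule and the law of total probability; here {A_i} is a partition of the sample space into events of positive probability, and P(B) > 0:

```latex
\[
  P(A_j \mid B) = \frac{P(B \mid A_j)\,P(A_j)}{P(B)},
  \qquad
  P(B) = \sum_i P(B \mid A_i)\,P(A_i).
\]
```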
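
Poisson approximation of the binomial (Lecture 3). A minimal numeric check, assuming the standard setup in which λ = np is held fixed while n grows; the parameter values are arbitrary illustrations, and only the Python standard library is used:

```python
from math import comb, exp, factorial

def binomial_pmf(k: int, n: int, p: float) -> float:
    """P(X = k) for X ~ Bin(n, p)."""
    return comb(n, k) * p**k * (1.0 - p) ** (n - k)

def poisson_pmf(k: int, lam: float) -> float:
    """P(Y = k) for Y ~ Poisson(lam)."""
    return exp(-lam) * lam**k / factorial(k)

# Illustrative parameters (not from the course): hold lam = n*p fixed, let n grow.
lam = 2.0
for n in (10, 100, 1000):
    p = lam / n
    gap = max(abs(binomial_pmf(k, n, p) - poisson_pmf(k, lam)) for k in range(20))
    print(f"n = {n:4d}   max pointwise pmf gap = {gap:.6f}")
```

The printed gap shrinks roughly like 1/n, which is the heuristic behind using Poisson(λ) for rare events among many trials.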
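
LOTUS (Lectures 4 and 6). The law of the unconscious statistician in its discrete and continuous forms, where p_X is the probability mass function and f_X the density:

```latex
\[
  \mathbb{E}[g(X)] = \sum_x g(x)\,p_X(x)
  \quad\text{(discrete)},
  \qquad
  \mathbb{E}[g(X)] = \int_{-\infty}^{\infty} g(x)\,f_X(x)\,dx
  \quad\text{(continuous)}.
\]
```

For instance, for X ~ Bernoulli(p) and g(x) = x², LOTUS gives E[X²] = 0²·(1−p) + 1²·p = p.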
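
The Monte Carlo method for volumes (Lecture 9). A sketch assuming the simplest hit-or-miss scheme: sample uniformly from the bounding cube [-1, 1]^3 and rescale the hit fraction. By the strong law of large numbers the estimate converges to the true volume 4π/3:

```python
import random
from math import pi

def mc_ball_volume(n_samples: int, dim: int = 3) -> float:
    """Hit-or-miss estimate of the unit-ball volume in R^dim,
    sampling uniformly from the bounding cube [-1, 1]^dim."""
    hits = sum(
        1
        for _ in range(n_samples)
        if sum(random.uniform(-1.0, 1.0) ** 2 for _ in range(dim)) <= 1.0
    )
    return (hits / n_samples) * 2.0**dim  # hit fraction times cube volume

random.seed(0)  # fixed seed so the illustration is reproducible
print("estimate:", mc_ball_volume(200_000))
print("exact:   ", 4.0 / 3.0 * pi)  # unit-ball volume in R^3
```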
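
Chebyshev vs the CLT approximation (Lecture 9). For n fair coin tosses the sample mean S_n/n has variance 1/(4n); the sketch compares Chebyshev's tail bound with the CLT's normal approximation of the same tail (the values of n and eps are arbitrary illustrative choices):

```python
from math import erf, sqrt

def std_normal_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Illustrative choices (not from the lecture): n tosses, deviation eps from the mean 1/2.
n, eps = 1000, 0.05
var = 1.0 / (4.0 * n)  # Var(S_n / n) for n fair coin tosses

chebyshev_bound = var / eps**2                                # P(|S_n/n - 1/2| >= eps) <= Var/eps^2
clt_estimate = 2.0 * (1.0 - std_normal_cdf(eps / sqrt(var)))  # two-sided normal tail

print(f"Chebyshev bound:   {chebyshev_bound:.4f}")  # 0.1000
print(f"CLT approximation: {clt_estimate:.6f}")     # ~0.0016
```

The gap between the two outputs illustrates how crude Chebyshev's bound is compared with the normal approximation when n is large.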
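
Maximum likelihood (Lecture 11). A standard one-parameter example, not necessarily the one treated in the lecture: for i.i.d. observations x_1, …, x_n from Bernoulli(p), maximizing the log-likelihood yields the sample mean as the estimator:

```latex
\[
  \ell(p) = \Bigl(\sum_{i=1}^n x_i\Bigr)\log p
          + \Bigl(n - \sum_{i=1}^n x_i\Bigr)\log(1 - p),
  \qquad
  \ell'(\hat p) = 0 \;\Longrightarrow\; \hat p = \frac{1}{n}\sum_{i=1}^n x_i.
\]
```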