I love chocolate chip cookies; the more chocolate chips, the better. I've managed to narrow down my favourites to five brands[^1]: Heavenly Bites, Subjectively Scrumptious, Hobbes' Nobs, The Happiness Mill and Kant Touch This. I'd like to consistently buy the brand with the most chocolate chips per cookie, and so, for science, I pick up a box of each with 10 cookies per box.

```python
import numpy as np
from scipy import stats

np.random.seed(1)  # for repeatability

# Each box of cookies will be generated by drawing from a Poisson distribution, with
# heavenly_bites having the lowest mean (8) and kant_touch_this the highest (16)
heavenly_bites = stats.poisson(8).rvs(10)
subjectively_scrumptious = stats.poisson(10).rvs(10)
hobbes_nobs = stats.poisson(12).rvs(10)
the_happiness_mill = stats.poisson(14).rvs(10)
kant_touch_this = stats.poisson(16).rvs(10)
```

```python
# each number in the array represents the number of chocolate chips on a cookie
heavenly_bites
# => [ 2  4  6 11  4 10  7  6  6 10]
```

## Chocolate Chip Likelihood

Just like earlier, I need a way to measure the likelihood of seeing a particular set of cookies. For this problem I'm going to assume that each bakery produces cookies following a Poisson distribution around some mean. The probability that a box was generated by a Poisson distribution with mean $\lambda$ is given by

$$\prod_{i=1}^{10} \textrm{poisson}(\lambda)\textrm{.pmf}(\textrm{cookie\_box}(i))$$

where pmf is the probability mass function and cookie_box(i) is the number of chocolate chips on cookie i.

```python
# Likelihood for the heavenly_bites box at lambda = 8 and lambda = 16
np.prod(stats.poisson(8).pmf(heavenly_bites))
# => 6.36e-12
np.prod(stats.poisson(16).pmf(heavenly_bites))
# => 8.47e-27
```

These are the likelihood plots for Heavenly Bites, Hobbes' Nobs and Kant Touch This (see how the peaks of the functions are near the true lambda values).
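The plots themselves don't survive here, but a likelihood curve of this kind can be sketched by evaluating the product of pmfs over a grid of candidate $\lambda$ values. This is only a sketch of the idea: the grid spacing and the hard-coded box contents (copied from the `heavenly_bites` output above) are my assumptions, not the post's exact code.

```python
import numpy as np
from scipy import stats

# Box values copied from the heavenly_bites output above
heavenly_bites = np.array([2, 4, 6, 11, 4, 10, 7, 6, 6, 10])

# Candidate lambda values in (0, 20], in increments of 0.001
lambdas = np.arange(0.001, 20, 0.001)

# Broadcasting: rows are cookies, columns are candidate lambdas;
# the box likelihood at each lambda is the product down each column
pmf_grid = stats.poisson.pmf(heavenly_bites[:, None], lambdas[None, :])
likelihood = pmf_grid.prod(axis=0)

# For a Poisson model the curve peaks at the sample mean of the box
peak = lambdas[np.argmax(likelihood)]
print(peak)  # close to heavenly_bites.mean() == 6.6
```

Repeating this for each box and plotting `likelihood` against `lambdas` reproduces curves peaking near the true means.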
## Cookie Posteriors

I'm going to assume that it's impossible to have a cookie with more than 20 chips (or fewer than 0). I'm also choosing an initial prior that assigns equal probability to all possible values of $\lambda$ in this space. Unfortunately, there's no clever hack I can use to easily calculate the posterior; I'll have to resort to a numerical approximation of Bayes' Rule.

```python
# heavenly_bites_likelihood and cookie_prior are both long numpy arrays
# with dimensions (20000, 1) containing y values for x in [0, 20] but
# in increments of 0.001

# The unnormalized posterior, the numerator in Bayes' Rule
heavenly_bites_un_posterior = np.multiply(heavenly_bites_likelihood, cookie_prior)

# Approximating the integral in the denominator
normalising_factor = np.sum(heavenly_bites_un_posterior)

# Applying Bayes' Rule
heavenly_bites_posterior = np.divide(heavenly_bites_un_posterior, normalising_factor)
```

Heavenly Bites' posterior is a lot more concentrated than the others, but that's the way the cookie crumbles: it all depends on the amount of information each cookie box provides, and a different set of cookies could have led to a different result. Here's a gist of the function I used to figure out the intervals.

## Sleight of Hand

The real $\lambda$ values are 8, 12 and 16; all within a 0.95 credible interval… so I have my answer, case closed? Well, not really. I cheated: picking a Poisson model over any other guaranteed good results, since that's what I used to generate the cookie boxes in the first place. If I were really approaching this problem without this information, I might have chosen a Gaussian model or something else entirely. I'm going to try that next.

## Update: A Clever Hack

As Chris pointed out, a Gamma distribution forms a conjugate prior for a Poisson likelihood. A Gamma distribution has the form $\textrm{gamma}(a, b)$, where $a$ is the shape parameter and $b$ is the inverted scale parameter.
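The linked gist isn't reproduced here, but one way such an interval function could work is to accumulate the discrete posterior's mass and read off the 0.025 and 0.975 quantiles. The helper name `credible_interval`, the grid, and the hard-coded box values are my assumptions, hedged as a sketch rather than the author's actual gist:

```python
import numpy as np
from scipy import stats

def credible_interval(xs, posterior, mass=0.95):
    """Central credible interval from a normalised discrete posterior."""
    cdf = np.cumsum(posterior)
    tail = (1.0 - mass) / 2.0
    return xs[np.searchsorted(cdf, tail)], xs[np.searchsorted(cdf, 1.0 - tail)]

# Rebuild the heavenly_bites posterior numerically
# (box values copied from the output earlier)
heavenly_bites = np.array([2, 4, 6, 11, 4, 10, 7, 6, 6, 10])
lambdas = np.arange(0.001, 20, 0.001)
likelihood = stats.poisson.pmf(heavenly_bites[:, None], lambdas[None, :]).prod(axis=0)
cookie_prior = np.full_like(lambdas, 1.0 / lambdas.size)  # flat prior on (0, 20]

posterior = likelihood * cookie_prior
posterior /= posterior.sum()

low, high = credible_interval(lambdas, posterior)
# The interval brackets the box's sample mean of 6.6
```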
If I have a chocolate chip distribution prior of $\textrm{gamma}(a, b)$ and then examine $n$ cookies, my updated posterior would be

$$\textrm{gamma}\left(a + \sum\limits_{i=1}^n \textrm{cookie\_box}(i),\; b + n\right)$$

An uninformative gamma prior looks like $\textrm{gamma}(a = {\sim}0, b = {\sim}0)$, where ${\sim}0$ is very small but still positive.

```python
from scipy.stats import gamma

# Updating the gamma(~0, ~0) prior
shape = 0 + sum(heavenly_bites)
inverted_scale = 0 + len(heavenly_bites)
heavenly_bites_posterior = gamma(shape, scale=1.0/inverted_scale)
```

The credible intervals are almost the same in this case; the biggest difference is in the values of $\textrm{pdf}(\lambda)$, which is an effect of the numerical approximation. The relative shapes of the distributions seem identical. This is an efficient way of solving the same problem, and it also verifies the numerical approach.

[^1]: Named for the philosophers found in the excellent Socrates Jones flash game.
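With the conjugate route, the credible interval can be read straight off the analytic posterior using scipy's `interval` method on the frozen distribution. A minimal sketch, assuming a small positive prior value in place of ${\sim}0$ and the box contents copied from earlier:

```python
import numpy as np
from scipy.stats import gamma

# Box values copied from the heavenly_bites output earlier (an assumption)
heavenly_bites = np.array([2, 4, 6, 11, 4, 10, 7, 6, 6, 10])

# Conjugate update of a gamma(~0, ~0) prior
shape = 1e-9 + heavenly_bites.sum()
inverted_scale = 1e-9 + len(heavenly_bites)
posterior = gamma(shape, scale=1.0 / inverted_scale)

# 0.95 credible interval directly from the analytic posterior,
# no numerical grid required
low, high = posterior.interval(0.95)
```

Comparing `(low, high)` against the interval from the numerical grid is one way to check that the two approaches agree.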