I've been writing a couple of web services lately that use Auth0 for identity management. It's a great platform that makes working with different identity providers a breeze. One thing I couldn't work out at first was how to quickly build a new JWT from an

Guess what! You can now start reading the fully-revised Clojure for the Brave and True through No Starch Press’s early access program! Use the coupon code ZOMBIEHUGS for 30% off! You’ll get the first four chapters, complete with fun new illustrations and professional editing. The fifth chapter will be up soon, and the final release is scheduled for June 25th. It’s also a way to give feedback (which I will appreciate very much!) before it’s cast in adamantium for printing, or whatever it is you do to books to print them these days. If you’re enjoying the book, I’d love it if you shared this blog post or tweeted about the early access :) I’m really happy with how the book has turned out. The No Starch editors have been awesome at helping to improve the clarity of the book, making it even more beginner-friendly. Most of the chapters have seen major rewrites, with significant improvements to the structure and content. I also appreciate all the comments, emails, and pull requests I’ve gotten; it warms the cockles of my heart to receive such generosity. I think this book will be useful for Clojure beginners, and I’m especially happy that it will remain available for free online. (I’ll be updating the site with the new book once it’s published, i.e. when I actually have time again for the first time in two years!) I’m grateful for all the work the Clojure community has put into creating such a delightful language and useful ecosystem, and I’m glad to make my own little contribution. I hope you enjoy it!

I did an analysis of the Delhi Assembly election 2015 and published it on Shiny apps; follow the link and post your comments below. ggplot is an R package for data exploration and producing plots. It produces fantastic-looking graphics and lets you slice and dice your data in many different ways. First, install and include […]

I was working on a small and simple application built with AngularJS the other day. As with most applications like this, I start with a single JavaScript file called app.js and no module system. In the past I've used RequireJS with AngularJS. It was an awful mistake. It leads to

tl;dr? Check your mongodb.conf bind_ip setting to make sure you aren't restricting connections to localhost only. This may just end up being the first part of a wider troubleshooting guide, but this is one I've spent a few hours fixing, after assuming I was making terrible
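For reference, the relevant setting in a classic-format mongodb.conf looks like this (paths and addresses are illustrative, not from the post above):

```conf
# /etc/mongodb.conf
# Bound to loopback only: remote clients cannot connect.
bind_ip = 127.0.0.1

# To accept remote connections, bind to a specific interface instead,
# or 0.0.0.0 for all interfaces (firewall it appropriately):
# bind_ip = 0.0.0.0
```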

While Chris has done an excellent job explaining this concept, I’m having too much fun with my coin example to stop now.

A Maybe Biased Coin

Suppose I try my hand at casting my own coin. If my work is even half as good, my coin is going to end up rather uneven and most likely biased. Unlike the double-headed coin problem, I know very little about the coin’s bias. One (not particularly good) choice for a prior is to assign equal probability to all possibilities.

In my last post, $\theta$ could only be one of two values and the prior reflected that: the height of a bar represented the probability mass for each value of $\theta$, and the sum of all heights was 1. In this case $\theta$ can take any value between 0 and 1, so it doesn’t make sense to talk about the probability of a specific value of $\theta$; the probability at any one point is infinitesimal. Here the prior is plotted as a probability density function, and I can calculate the probability mass between any two points $(a, b)$ by integrating $\mathrm{pdf}(\theta)$ from $a$ to $b$. Since this is a probability density function, $\int_{0}^{1} \mathrm{pdf}(\theta)\, d\theta = 1$

Continuous Likelihood

The binomial distribution gives the probability that an independent binary event (like a coin flip) with success probability $\theta$ succeeds k times out of N attempts: $$\mathrm{P}(\textrm{k successes, N attempts} | \theta) = \binom{N}{k}\cdot\theta^{k}\cdot(1-\theta)^{N-k}$$ Imagine I flip my coin 10 times and end up with only 2 heads. For an unbiased coin, this is fairly unlikely $$\mathrm{P}(\textrm{2 successes, 10 attempts} | \theta=0.5) $$ $$ = \binom{10}{2}\cdot(0.5)^{2}\cdot(1 - 0.5)^{10 - 2} $$ $$ \approx 0.044 $$ Just like last time, with a larger number of flips the likelihood for most values of $\theta$ approaches 0. With additional flips the likelihood narrows, with its peak at $\frac{k}{N}$.
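The binomial likelihood above is easy to check directly. A quick sketch in Python (not from the original post):

```python
from math import comb

def binomial_likelihood(theta, k, n):
    """P(k successes in n attempts | theta) under the binomial distribution."""
    return comb(n, k) * theta**k * (1 - theta)**(n - k)

# 2 heads out of 10 flips is fairly unlikely for a fair coin:
print(round(binomial_likelihood(0.5, 2, 10), 3))  # → 0.044
```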
Continuous Posteriors

I can update the posterior using Bayes’ rule: $$ \mathrm{posterior}(\theta) = \frac{\mathrm{likelihood}(\theta) \cdot \mathrm{prior}(\theta)}{\int_{0}^{1} \mathrm{likelihood}(\theta) \cdot \mathrm{prior}(\theta)\, d\theta} $$ Usually I’d have to approximate a solution using numerical integration, but there’s a simpler solution for this particular type of problem.

Conjugate Priors

If the prior and the posterior belong to the same family of functions, computing the posterior can be much simpler. For example, if my prior is Gaussian and my likelihood is Gaussian, then my posterior will also be Gaussian, i.e. Gaussian functions are conjugate to themselves. In this case, since my likelihood function is a binomial distribution, its conjugate prior is a beta distribution. Specifically, if my prior is of the form $\mathrm{beta}(a, b)$, where a and b are the parameters of the distribution, and the likelihood is a binomial with N attempts and k successes, then the posterior is a beta distribution with parameters a + k and b + N - k. Updating the posterior reduces to simple addition.

Updating the Posterior

The uniform prior is the same as $\mathrm{beta}(1, 1)$. For 2 heads out of 10 flips and the prior $\mathrm{beta}(1, 1)$: $$ \mathrm{posterior}(\theta) $$ $$ = \mathrm{beta}(a + k, b + N - k) $$ $$ = \mathrm{beta}(1 + 2, 1 + 10 - 2) $$ $$ = \mathrm{beta}(3, 9) $$ Since a pdf describes probability density and not probability mass, $\mathrm{pdf}(\theta)$ can be greater than 1 as long as the integral over (0, 1) is 1.

Credibility

It doesn’t make sense to ask for the probability of a particular $\theta$, but it’s useful to know the probability of a range. Knowing that $P(0.1\lt\theta\lt0.3) = 0.95$ is far better than where I started, especially as the range gets smaller. This is a credible interval, and there are several ways to pick one.
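Because the update reduces to addition, the conjugate step fits in a couple of lines. A minimal sketch (function name is mine, not from the post):

```python
def update_beta(a, b, k, n):
    """Conjugate update: beta(a, b) prior + binomial evidence (k successes
    out of n attempts) -> beta(a + k, b + n - k) posterior."""
    return a + k, b + n - k

# Uniform prior beta(1, 1), then observing 2 heads in 10 flips:
print(update_beta(1, 1, 2, 10))  # → (3, 9)
```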
I like to pick the highest density interval: the shortest range (or set of ranges) that contains a given amount of probability mass (0.95, for example). For a unimodal distribution, like a beta distribution, there is only one highest density interval.

Making a Decision

There will always be uncertainty, and not making a decision is often not an option. In this situation, I have a few choices. I could pick the mean, which is the expected value of $\theta$, or the mode, which is the most likely value of the distribution (the peak).

a   b   mode   mean   variance
1   2   0      0.33   0.05
2   5   0.2    0.29   0.03
3   9   0.2    0.25   0.01

Either way, with additional flips the variance drops and both measures of central tendency predict $\theta$ more accurately.

Complications

But what if my $\theta$ changes over time, or I have events with multiple possible outcomes and can’t use beta distributions or even numerical integration? More on that next time.
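The table's figures come straight from the standard beta-distribution formulas. A sketch that reproduces them (figures in the table are rounded, so the last decimal may differ slightly):

```python
def beta_summary(a, b):
    """Mode, mean, and variance of a beta(a, b) distribution."""
    mode = (a - 1) / (a + b - 2)  # well-defined here since a + b > 2
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return round(mode, 2), round(mean, 2), round(var, 2)

for a, b in [(1, 2), (2, 5), (3, 9)]:
    print(a, b, *beta_summary(a, b))
```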

After spending enough time wrestling with installing Oracle SQL Developer and its respective issues, I thought of writing up a few easy steps. I hope it helps and saves you some time.

Steps to install Oracle SQL Developer (for Windows):
1. Download SQL Developer from the Oracle site.
2. Download the JDK if it is not already installed on your machine.
3. Install the JDK, then unzip SQL Developer.
4. Run sqldeveloper.exe; it will ask for the path to java.exe.
5. Point it at the folder where the JDK is installed. It would be something like "C:\Program Files\Java\jdk1.7.0_40\jre\bin", depending on which version you installed.
6. If it throws an exception, open "..\sqldeveloper\sqldeveloper\bin\sqldeveloper.conf", remove the SetJavaHome entry, and give the correct path to java.exe.

Please provide your feedback if you like the post.
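For reference, the SetJavaHome line in sqldeveloper.conf looks something like this (the JDK path below is an example, not a requirement):

```conf
# ..\sqldeveloper\sqldeveloper\bin\sqldeveloper.conf
# SetJavaHome should point at the JDK install directory;
# remove or correct this line if SQL Developer fails to start.
SetJavaHome C:\Program Files\Java\jdk1.7.0_40
```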

Plotting Priors

I often find it useful to plot probability functions; it gives me a better idea of how my probabilities are being updated. Take the double-headed coin problem. A jar has 1000 coins, of which 999 are fair and 1 is double-headed. Pick a coin at random, and toss it 10 times. Given that you see 10 heads, what is the probability that the next toss of that coin is also a head?

This plot represents my prior probability for $\theta$, the probability that the coin I pull from the jar will land heads. Since most of the coins are unbiased (999 out of 1000), most of the mass is concentrated over 0.5. There’s an invisible sliver at 1.0 for the double-headed coin. To calculate P(heads), I take the sum over $\theta$ weighted by $P(\theta)$: $$ = \sum\limits_{i=1}^N \theta_i \cdot P(\theta_i) $$ $$ = 0.5 \cdot \frac{999}{1000} + 1.0 \cdot \frac{1}{1000} $$ $$ P(\textrm{ heads }) = 0.5005 $$

Coin Flip Likelihood

After seeing new evidence, to compute a posterior I need both the prior and the likelihood of the evidence; the posterior is the normalised product of the two. For an unbiased coin, the probability of z heads in a row is $(0.5)^{z}$; for the double-headed coin it’s $(1)^{z}$, which is $1$. In general, the probability of landing z heads in a row is $\theta^{z}$ when $P(\textrm{ heads }) = \theta$. (Plotting the likelihood for several flips.) See how quickly the likelihood falls for lower values of $\theta$: 10 heads in a row is very unlikely, and the plot after 10 flips reflects that. While the likelihood for an individual $\theta$ is a probability, the likelihood plot isn’t a probability distribution and doesn’t need to sum to 1.
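The prior expectation above is a two-term weighted sum, easy to sanity-check in code (a sketch, not from the original post):

```python
# Prior over theta: 999 fair coins (theta = 0.5), 1 double-headed (theta = 1.0).
prior = {0.5: 999 / 1000, 1.0: 1 / 1000}

# P(heads) is the expectation of theta under the prior.
p_heads = sum(theta * p for theta, p in prior.items())
print(round(p_heads, 4))  # → 0.5005
```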
Plotting Posteriors

I can update the posterior at every $\theta$ using Bayes’ rule: $$ \mathrm{posterior}(\theta) = \frac{\mathrm{likelihood}(\theta) \cdot \mathrm{prior}(\theta)}{\sum\limits_{i=1}^N \mathrm{likelihood}(\theta_i) \cdot \mathrm{prior}(\theta_i)} $$ The key takeaway here is that the posterior is simply the prior scaled by the likelihood and then renormalised. The complicated-looking denominator is only there to ensure that the posterior sums to 1, making it a valid probability distribution.

The likelihood function is exponential in z: the first few flips barely change the distribution, but by z=10 either kind of coin is about equally likely. Calculating $ P(\textrm{ heads }) $ for each $ z $:

$ P(\textrm{ heads } | z=\space\space1) = 0.5 \cdot 0.998 + 1.0 \cdot 0.002 = 0.501 $
$ P(\textrm{ heads } | z=\space\space5) = 0.5 \cdot 0.969 + 1.0 \cdot 0.031 = 0.516 $
$ P(\textrm{ heads } | z=10) = 0.5 \cdot 0.494 + 1.0 \cdot 0.506 = 0.753 $

Further Reading

Plotting posteriors might give you insights you would otherwise have missed, and it hints at even more. Bayes’ rule works just as well for continuous functions: sums become integrals, but the core idea remains the same. There are even neat hacks to make some of those calculations trivial.
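The whole discrete update can be sketched in a few lines of Python (function name is mine); the likelihood of z heads in a row at a given $\theta$ is just $\theta^z$:

```python
# Two-point prior: theta = 0.5 (fair) or theta = 1.0 (double-headed).
prior = {0.5: 999 / 1000, 1.0: 1 / 1000}

def posterior_after_heads(prior, z):
    """Bayes' rule after observing z heads in a row: scale the prior by the
    likelihood theta**z, then renormalise so the masses sum to 1."""
    unnorm = {t: t**z * p for t, p in prior.items()}
    total = sum(unnorm.values())
    return {t: v / total for t, v in unnorm.items()}

for z in (1, 5, 10):
    post = posterior_after_heads(prior, z)
    p_heads = sum(t * p for t, p in post.items())
    # At z=10 this reproduces the numbers above: P(double-headed) ≈ 0.506,
    # P(heads) ≈ 0.753.
    print(z, round(post[1.0], 3), round(p_heads, 3))
```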

Dealing with memory leaks in JavaScript applications can be a complex process. In this article I'm going to show you how to identify whether you have memory leaks, analyse them, and ultimately resolve them. I'm using an AngularJS application to demonstrate the concepts and approaches, but much of this material