The k-nearest neighbor (k-NN) graph encodes the local geometry of points in a sample. This property has led to a wide variety of machine learning applications for k-NN graphs, e.g., density estimation, manifold learning, and non-parametric classification. For samples with finite support, our analysis shows that k-NN density estimators behave differently in the interior of the support than near its boundary. Motivated by this analysis, we propose improving the behavior of k-NN graphs by thinning their edges near the boundary. We illustrate the advantages of such boundary-corrected k-NN graphs for entropy estimation and classification.
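The boundary-thinning idea described above can be sketched in code. The following is a minimal illustration, not the authors' method: it builds a k-NN graph with NumPy, flags likely boundary points using a simple heuristic (the norm of the mean displacement to a point's neighbors, which is large when all neighbors lie to one side, as happens near the boundary of the support), and then drops edges incident to flagged points. The heuristic, the quantile threshold, and the function names are all assumptions made for this sketch.

```python
import numpy as np

def knn_graph(X, k):
    # Pairwise Euclidean distances; each row's k smallest give the neighbors.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)  # a point is not its own neighbor
    return np.argsort(D, axis=1)[:, :k]

def boundary_scores(X, nbrs):
    # Heuristic (an assumption, not from the paper): norm of the mean
    # displacement vector to the k neighbors. Interior points have
    # neighbors on all sides, so the displacements roughly cancel;
    # boundary points have one-sided neighborhoods and a large norm.
    disp = X[nbrs] - X[:, None, :]
    return np.linalg.norm(disp.mean(axis=1), axis=1)

def thinned_knn_graph(X, k, quantile=0.9):
    # Build the k-NN graph, then thin it by removing every edge that
    # touches a point whose boundary score is above the chosen quantile.
    nbrs = knn_graph(X, k)
    scores = boundary_scores(X, nbrs)
    interior = scores <= np.quantile(scores, quantile)
    return [(i, j) for i in range(len(X)) for j in nbrs[i]
            if interior[i] and interior[j]]
```

On a uniform sample from the unit square, points near the edges of the square tend to receive high scores, so the thinned graph concentrates its edges in the interior, mimicking the boundary correction the abstract describes.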