It's all in the data (image compression)
https://shankarmsy.github.io/
Mon, 26 Jan 2015 12:50:58 GMT

Visually differentiating PCA and Linear Regression
https://shankarmsy.github.io/posts/pca-vs-lr.html
Shankar Muthuswamy
<div tabindex="-1" id="notebook" class="border-box-sizing">
<div class="container" id="notebook-container">
<div class="cell border-box-sizing text_cell rendered">
<div class="prompt input_prompt">
</div>
<div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>I've always been fascinated by the concept of PCA. Considering its wide range of applications and how inherently mathematical the idea is, I feel PCA is one of the pillars of the intersection between pure mathematics and real-world analytics. Besides, the fact that you can treat real data as just raw numbers and then transform it down to something you can visualize and relate to is extremely powerful, and essential to any learning process.</p>
<p>Just in case you're wondering, Principal Component Analysis (PCA) is, simply put, a dimensionality reduction technique that finds the combinations of variables that explain the most variance. You can transform a 1000-feature dataset into 2D so you can visualize it in a plot, or you can bring it down to x features, where x &lt;&lt; 1000, while preserving most of the variance in the data. I've previously explored <a href="https://shankarmsy.github.io/posts/pca-sklearn.html">Facial image compression and reconstruction using PCA</a> using scikit-learn.</p>
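As a minimal sketch of the reduction described above, here is the 1000-features-to-2D transform with scikit-learn. The data here is synthetic (random numbers standing in for a real feature matrix), purely for illustration:

```python
# Sketch: reducing a 1000-feature dataset to 2D with PCA (synthetic data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.rand(200, 1000)               # 200 samples, 1000 features

pca = PCA(n_components=2)             # keep the 2 directions of highest variance
X_2d = pca.fit_transform(X)           # project the data onto those directions

print(X_2d.shape)                     # (200, 2) -- ready to scatter-plot
print(pca.explained_variance_ratio_)  # fraction of variance each component keeps
```

The `explained_variance_ratio_` attribute is how you check whether two components actually preserve "most of the variance" for your data, or whether you need more.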
<p>In this post I would like to delve into the concept of linearity in Principal Component Analysis.</p><p><a href="https://shankarmsy.github.io/posts/pca-vs-lr.html">Read more…</a> (6 min remaining to read)</p></div></div></div></div></div>
Tags: dimensionality reduction, image compression, linear regression, pca, scikit-learn, unsupervised learning
https://shankarmsy.github.io/posts/pca-vs-lr.html
Fri, 12 Dec 2014 17:01:00 GMT

Facial Image Compression and Reconstruction with PCA
https://shankarmsy.github.io/posts/pca-sklearn.html
Shankar Muthuswamy
<div tabindex="-1" id="notebook" class="border-box-sizing">
<div class="container" id="notebook-container">
<div class="cell border-box-sizing text_cell rendered">
<div class="prompt input_prompt">
</div>
<div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>Principal Component Analysis (PCA) is a dimensionality reduction technique that can find the combinations of variables that explain the most variance. In this post I will demonstrate dimensionality reduction concepts, including facial image compression and reconstruction, using PCA.</p>
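The compress-then-reconstruct round trip at the heart of the post can be sketched as follows. This is an illustrative assumption, not the post's actual code: the random matrix below stands in for flattened 64×64 face images, and 50 components is an arbitrary choice:

```python
# Sketch: PCA compression and approximate reconstruction of "images"
# (random data stands in for flattened 64x64 face images).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(42)
X = rng.rand(100, 4096)            # 100 images, 64*64 = 4096 pixels each

pca = PCA(n_components=50)         # compress 4096 pixels down to 50 components
X_compressed = pca.fit_transform(X)
X_restored = pca.inverse_transform(X_compressed)  # map back to pixel space

print(X_compressed.shape)          # (100, 50)  -- the compressed representation
print(X_restored.shape)            # (100, 4096) -- approximate reconstruction
```

`inverse_transform` projects the compressed representation back into the original pixel space; the reconstruction is lossy, and its quality depends on how much variance the retained components capture.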
<p>Let's get started.</p><p><a href="https://shankarmsy.github.io/posts/pca-sklearn.html">Read more…</a> (7 min remaining to read)</p></div></div></div></div></div>
Tags: dimensionality reduction, image compression, pca, scikit-learn, unsupervised learning
https://shankarmsy.github.io/posts/pca-sklearn.html
Wed, 12 Nov 2014 18:04:24 GMT