## Description

**An Introduction to Statistical Learning** provides an accessible overview of the field of statistical learning, an essential toolset for making sense of the vast and complex data sets that have emerged in fields ranging from biology to finance to marketing to astrophysics in the past twenty years. This book presents some of the most important modeling and prediction techniques, along with relevant applications. Topics include linear regression, classification, resampling methods, shrinkage approaches, tree-based methods, support vector machines, clustering, and more. Color graphics and real-world examples are used to illustrate the methods presented. Since the goal of this textbook is to facilitate the use of these statistical learning techniques by practitioners in science, industry, and other fields, each chapter contains a tutorial on implementing the analyses and methods presented in R, an extremely popular open source statistical software platform.

Two of the authors co-wrote The Elements of Statistical Learning (Hastie, Tibshirani and Friedman, 2nd edition 2009), a popular reference book for statistics and machine learning researchers. **An Introduction to Statistical Learning** covers many of the same topics, but at a level accessible to a much broader audience. This book is targeted at statisticians and non-statisticians alike who wish to use cutting-edge statistical learning techniques to analyze their data. The text assumes only a previous course in linear regression and no knowledge of matrix algebra.

stephen lung – Amazing book! A great intro to ML and statistical learning with some solid, clear, and practical examples. Some of the concepts introduced appear so simple to the human mind, but getting the machine to learn these concepts is a whole different science. This book made me appreciate the wonders of ML. It also reinforced the notion that vast industries will be revolutionized; it is just a matter of time. In this book alone, I learned about the different supervised and unsupervised learning techniques (e.g., the bootstrap, bagging, random forests, boosting).

– Bootstrap: a technique that treats the sample as a stand-in for the population and repeatedly draws new samples from it with replacement, in order to estimate the variability of a statistic

– Bagging: fitting a decision tree to each of many bootstrapped samples and averaging the resulting predictions

– Random Forest: like bagging, except that at each split only a random subset of the predictors is considered; this decorrelates the trees and makes their average less variable

– Boosting: each decision tree is fit sequentially to the residuals of the trees that came before, so the ensemble learns slowly from the errors of the earlier trees
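The bootstrap idea in the list above can be sketched in a few lines. The book's own labs use R; the snippet below is just an illustrative stdlib-Python translation, estimating the standard error of a sample mean by resampling with replacement (the data values are made up for the example):

```python
import random
import statistics

def bootstrap_se(sample, n_resamples=1000, seed=0):
    """Estimate the standard error of the sample mean by treating
    the sample as the population and resampling with replacement."""
    rng = random.Random(seed)  # seeded for reproducibility
    means = []
    for _ in range(n_resamples):
        # Draw a resample of the same size, with replacement
        resample = rng.choices(sample, k=len(sample))
        means.append(statistics.mean(resample))
    # The spread of the resampled means approximates the standard error
    return statistics.stdev(means)

data = [2.3, 1.9, 3.1, 2.8, 2.2, 3.4, 1.7, 2.9, 2.5, 3.0]
print(bootstrap_se(data))
```

The same resampling machinery underlies bagging: fit a tree to each bootstrapped sample instead of computing a mean, then average the trees' predictions.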

It’s my first step into data science but won’t be my last. Looking forward to deepening my knowledge of it. Reading the book and watching the ISL YouTube videos helped significantly in understanding the concepts.

afloatingpoint – Excellent book!

The book explains the concepts of statistical learning from the very beginning. Core ideas such as the bias-variance tradeoff are discussed in depth and revisited across many problems. The included R examples are particularly helpful for beginners learning R, and the book also provides brief but clear descriptions of the function parameters for many related R packages.

My professor thinks this book is a “superficial” version of The Elements of Statistical Learning, but I disagree. Yes, it may be easy for the reader to understand, but isn’t it true that a great educator is someone who can explain complicated concepts in simple terms? There should be no shame in reading such a book, one that does a wonderful job of breaking things down.

If one wishes to learn more about a particular topic, I’d recommend The Elements of Statistical Learning. These two pair nicely together.