I'm an undergraduate interested in machine learning and would like to do research in this area. Online I've seen some classic books, like Bishop's PRML, Tom Mitchell's Machine Learning, and Pattern Classification. How should I get started? Which book is easiest to understand?
Let me translate a Quora answer (Quora - The best answer to any question) and add some of my own understanding; I believe it will make a good answer.
1. Python/C++/R/Java - you will probably want to learn all of these languages at some point if you want a job in machine learning. Python's NumPy and SciPy libraries are awesome because they have similar functionality to MATLAB, but can be easily integrated into a web service and also used in Hadoop (see below). C++ will be needed to speed code up. R is great for statistics and plots, and Hadoop is written in Java, so you may need to implement mappers and reducers in Java (although you could use a scripting language via Hadoop Streaming).
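To illustrate the MATLAB-like feel of NumPy mentioned above, here is a minimal sketch (the data is made up) that fits a least-squares line without any explicit loops:

```python
import numpy as np

# Vectorized least-squares fit, y ≈ X @ w, with no explicit loops —
# the kind of one-liner that would be a backslash-solve in MATLAB.
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # design matrix with a bias column
y = np.array([2.0, 3.0, 4.0])                        # targets lying exactly on y = 1 + x
w, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
print(w)  # ≈ [1. 1.] (intercept and slope)
```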
2. Probability and Statistics: A good portion of learning algorithms are based on this theory - Naive Bayes, Gaussian Mixture Models, and Hidden Markov Models, to name a few. You need a firm understanding of probability and statistics to understand these models. Go nuts and study measure theory. Use statistics for model evaluation: confusion matrices, receiver operating characteristic (ROC) curves, p-values, etc.
I recommend Statistical Learning Methods (《统计学习方法》) by Li Hang — he could be considered my mentor's mentor. Understand some of the probabilistic theory, e.g. Bayes, SVM, CRF, HMM, decision trees, AdaBoost, and logistic regression, then look a bit at how to do evaluation, e.g. precision/recall/F1 (P/R/F). It's also worth reading up on hypothesis testing.
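The P/R/F evaluation metrics mentioned above fall straight out of the confusion matrix; a minimal sketch with made-up binary predictions:

```python
# Precision/recall/F1 computed from confusion-matrix counts
# over a tiny hypothetical set of binary predictions.
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives: 4
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives: 1
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives: 1

precision = tp / (tp + fp)                          # 4/5 = 0.8
recall = tp / (tp + fn)                             # 4/5 = 0.8
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean = 0.8
```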
3. Applied Math + Algorithms: For discriminative models like SVMs, you need a firm understanding of algorithm theory. Even though you will probably never need to implement an SVM from scratch, it helps to understand how the algorithm works. You will need to understand subjects like convex optimization, gradient descent, quadratic programming, Lagrange multipliers, partial differential equations, etc. Get used to looking at summations.
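Gradient descent, mentioned above, is worth seeing once in its simplest form; a toy sketch minimizing the convex function f(w) = (w - 3)^2:

```python
# Gradient descent on f(w) = (w - 3)^2, whose minimum is at w = 3.
def grad(w):
    return 2.0 * (w - 3.0)  # derivative of (w - 3)^2

w = 0.0    # starting point
lr = 0.1   # learning rate (step size)
for _ in range(200):
    w -= lr * grad(w)  # step downhill along the negative gradient

print(round(w, 4))  # 3.0 — converged to the minimizer
```

With a step size this small on a quadratic, the error shrinks geometrically by a factor of (1 - 2·lr) per step, which is why 200 iterations are more than enough here.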
4. Distributed Computing: Most machine learning jobs require working with large data sets these days (see Data Science). You cannot process this data on a single machine; you will have to distribute it across an entire cluster. Projects like Apache Hadoop and cloud services like Amazon's EC2 make this easy and cost-effective. Although Hadoop abstracts away a lot of the hard-core distributed-computing problems, you still need a firm understanding of map-reduce, distributed file systems, etc. You will most likely want to check out Apache Mahout and Apache Whirr.
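The map-reduce model that Hadoop implements can be sketched in-process; here is a toy word count with explicit map, shuffle, and reduce phases (an illustration of the programming model only, not how you would actually run Hadoop):

```python
from itertools import groupby

docs = ["the cat sat", "the cat ran"]  # stand-ins for input splits

# Map phase: each mapper emits (word, 1) key/value pairs.
mapped = [(word, 1) for doc in docs for word in doc.split()]

# Shuffle phase: group pairs by key (Hadoop does this between map and reduce).
mapped.sort(key=lambda kv: kv[0])
grouped = {k: [v for _, v in g] for k, g in groupby(mapped, key=lambda kv: kv[0])}

# Reduce phase: each reducer sums the counts for one word.
counts = {word: sum(vals) for word, vals in grouped.items()}
print(counts)  # {'cat': 2, 'ran': 1, 'sat': 1, 'the': 2}
```

The point of the model is that the map and reduce functions are embarrassingly parallel: each can run on a different machine, with only the shuffle requiring coordination.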
5. Expertise in Unix Tools: Unless you are very fortunate, you are going to need to modify the format of your data sets so they can be loaded into R, Hadoop, HBase, etc. You can use a scripting language like Python (using re) to do this, but the best approach is probably to master all of the awesome Unix tools designed for this: cat, grep, find, awk, sed, sort, cut, tr, and many more. Since all of the processing will most likely be on Linux-based machines (Hadoop doesn't run on Windows, I believe), you will have access to these tools. You should learn to love them and use them as much as possible. They have certainly made my life a lot easier.
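The point above about reformatting data with Python's re module can be sketched like this (the log format and the pattern are hypothetical, just to show the idea):

```python
import re

# Turn messy, whitespace-padded log lines into clean tab-separated fields
# ready to load into R, Hive, etc.
lines = [
    "2015-01-02 10:31:05  GET /index.html 200",
    "2015-01-02 10:31:09  POST /login     302",
]
# Capture: timestamp, HTTP method, path, status code.
pattern = re.compile(r"^(\S+ \S+)\s+(\S+)\s+(\S+)\s+(\d+)$")
rows = ["\t".join(pattern.match(line).groups()) for line in lines]
```

The equivalent one-liner with the Unix tools above would be something like `awk '{print $1" "$2"\t"$3"\t"$4"\t"$5}'`, which is exactly why they are worth learning.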
6. Become familiar with the Hadoop sub-projects: HBase, Zookeeper, Hive, Mahout, etc. These projects can help you store/access your data, and they scale.
Machine learning is ultimately tied to big data, so keep an eye on the Hadoop sub-projects, such as HBase, Zookeeper, and Hive.
7. Learn about advanced signal processing techniques: Feature extraction is one of the most important parts of machine learning. If your features suck, no matter which algorithm you choose, you're going to see horrible performance. Depending on the type of problem you are trying to solve, you may be able to utilize really cool advanced signal processing algorithms like wavelets, shearlets, curvelets, contourlets, and bandlets. Learn about time-frequency analysis, and try to apply it to your problems. If you have not read about Fourier analysis and convolution, you will need to learn about this stuff too. The latter is signal processing 101, though.
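The Fourier/convolution relationship mentioned above can be demonstrated in a few lines: circular convolution in the time domain is pointwise multiplication in the frequency domain (the signals here are made up):

```python
import numpy as np

x = np.array([0.0, 1.0, 0.0, 0.0])  # a unit impulse at index 1
h = np.array([0.5, 0.5, 0.0, 0.0])  # a 2-tap averaging filter, zero-padded

# Circular convolution computed via the FFT: multiply the spectra,
# then transform back. Convolving with an impulse just shifts the filter.
conv = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
# conv ≈ [0, 0.5, 0.5, 0] — the filter h shifted to the impulse's position
```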
Finally, practice and read as much as you can. In your free time, read papers like Google Map-Reduce, Google File System, Google Big Table, and The Unreasonable Effectiveness of Data. There are great free machine learning books online, and you should read those too. Here is an awesome course I found and re-posted on GitHub. Instead of using open source packages, code up your own, and compare the results. If you can code an SVM from scratch, you will understand the concepts of support vectors, gamma, cost, hyperplanes, etc. It's easy to just load some data up and start training; the hard part is making sense of it all.
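As a small taste of coding an SVM from scratch, here is a toy linear SVM trained by subgradient descent on the hinge loss; the data and all hyperparameters are illustrative, not taken from any library:

```python
import numpy as np

# Toy linearly separable data with labels in {-1, +1}.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

w = np.zeros(2)
b = 0.0
lam, lr = 0.01, 0.1  # L2 regularization strength and learning rate

# Minimize (lam/2)||w||^2 + mean hinge loss max(0, 1 - y*(w·x + b))
# by stochastic subgradient steps over the data.
for _ in range(500):
    for xi, yi in zip(X, y):
        margin = yi * (w @ xi + b)
        if margin < 1:                      # point violates the margin: hinge is active
            w -= lr * (lam * w - yi * xi)
            b += lr * yi
        else:                               # only the regularizer contributes
            w -= lr * lam * w

print(np.sign(X @ w + b))  # recovers the training labels [1, 1, -1, -1]
```

Working through why only margin-violating points update `w` is exactly how the idea of support vectors becomes concrete.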
Originally published on the WeChat public account 大数据挖掘DT数据分析 (datadw).