
Learner reviews and feedback for Machine Learning: Classification by University of Washington

4.7 stars
3,523 ratings
586 reviews

About the Course

Case Studies: Analyzing Sentiment & Loan Default Prediction

In our case study on analyzing sentiment, you will create models that predict a class (positive/negative sentiment) from input features (text of the reviews, user profile information, ...). In our second case study for this course, loan default prediction, you will tackle financial data and predict when a loan is likely to be risky or safe for the bank. These tasks are examples of classification, one of the most widely used areas of machine learning, with a broad array of applications including ad targeting, spam detection, medical diagnosis, and image classification.

In this course, you will create classifiers that provide state-of-the-art performance on a variety of tasks. You will become familiar with the most successful and most widely used techniques in practice, including logistic regression, decision trees, and boosting. In addition, you will be able to design and implement the underlying algorithms that can learn these models at scale, using stochastic gradient ascent. You will implement these techniques on real-world, large-scale machine learning tasks. You will also address significant issues you will face in real-world applications of ML, including handling missing data and measuring precision and recall to evaluate a classifier. This course is hands-on, action-packed, and full of visualizations and illustrations of how these techniques behave on real data. We've also included optional content in every module, covering advanced topics for those who want to go even deeper!

Learning Objectives: By the end of this course, you will be able to:

-Describe the input and output of a classification model.

-Tackle both binary and multiclass classification problems.

-Implement a logistic regression model for large-scale classification.

-Create a non-linear model using decision trees.

-Improve the performance of any model using boosting.

-Scale your methods with stochastic gradient ascent.

-Describe the underlying decision boundaries.

-Build a classification model to predict sentiment in a product review dataset.

-Analyze financial data to predict loan defaults.

-Use techniques for handling missing data.

-Evaluate your models using precision-recall metrics.

-Implement these techniques in Python (or in the language of your choice, though Python is highly recommended).
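The description above mentions building a logistic regression sentiment classifier and evaluating it with precision and recall. As a rough, minimal sketch of that workflow only (not the course's own assignments or tooling; scikit-learn, the toy reviews, and the labels below are all assumptions for illustration), it might look like this in Python:

```python
# Minimal sketch (assumption: scikit-learn as a stand-in for the course's own
# implementations): train a logistic regression sentiment classifier on toy
# review text and evaluate it with precision and recall.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

# Hypothetical toy data: review text with sentiment labels (1 = positive, 0 = negative).
reviews = [
    "great product, works perfectly",
    "terrible quality, broke after a day",
    "loved it, would buy again",
    "awful experience, do not recommend",
]
labels = [1, 0, 1, 0]

# Bag-of-words features from the review text.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(reviews)

# Fit the classifier and report precision-recall metrics on the training data.
model = LogisticRegression()
model.fit(X, labels)
predictions = model.predict(X)
print("precision:", precision_score(labels, predictions))
print("recall:", recall_score(labels, predictions))
```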

Top Reviews

SM
Jun 14, 2020

A very deep and comprehensive course for learning some of the core fundamentals of Machine Learning. Can get a bit frustrating at times because of numerous assignments :P but a fun thing overall :)

SS
Oct 15, 2016

Hats off to the team who put the course together! Prof Guestrin is a great teacher. The course gave me in-depth knowledge regarding classification and the math and intuition behind it. It was fun!


376 - 400 of 554 reviews for Machine Learning: Classification

by Muhammad H S

Nov 2, 2016

Excellent

by Joshua C

May 3, 2017

Awesome!

by Roberto E

Mar 1, 2017

awesome!

by Isura N

Dec 28, 2017

Hoooray

by Anshumaan K P

Nov 11, 2020

NYC ;)

by Shashidhar Y

Apr 2, 2019

Nice!!

by Md. T U B

Sep 2, 2020

great

by Subhadip P

Aug 4, 2020

great

by Nicholas S

Oct 7, 2016

Great

by 李真

Mar 5, 2016

great

by SAYANTAN N

Jan 28, 2021

good

by boulealam c

Dec 15, 2020

good

by Saurabh A

Sep 11, 2020

good

by SUJAY P

Aug 21, 2020

nice

by ANKAN M

Aug 16, 2020

nice

by Sadhiq A

Jun 19, 2020

good

by AMARTHALURU N K

Nov 24, 2019

good

by RISHI P M

Aug 19, 2019

Good

by Akash G

Mar 10, 2019

good

by xiaofeng y

Feb 5, 2017

good

by Kumiko K

Jun 5, 2016

Fun!

by Arun K P

Oct 17, 2018

G

by Navinkumar

Feb 23, 2017

g

by MARIANA L J

Aug 12, 2016

The good:

-Good examples to learn the concepts

-Good organization of the material

-The assignments were well explained and easy to follow

-The good humor and attitude of the professor makes the lectures very engaging

-All video lectures are short, which makes them easy to digest and follow (the optional videos were longer than the rest of the lectures, but the material covered in those was quite advanced, so their length is justifiable)

Things that can be improved:

-In some of the videos the professor seemed to cruise through some of the concepts. I understand that it is recommended to take the series of courses in a certain order, but sometimes I felt we were rushing through the material covered

-I may be nitpicking here but I wish the professor used a different color to write on the slides (the red he used clashed horribly with some of the slides' backgrounds and made it difficult to read his observations)

Overall, a good course to take and very easy to follow if taken together with the other courses in the series.

by Hanif S

Jun 2, 2016

Highly recommended course, looking under the hood to examine how popular ML algorithms like decision trees and boosting are actually implemented. I'm surprised at how intuitive the idea of boosting really is. Also interesting that random forests are dismissed as not as powerful as boosting, but I would love to know why! Both methods appear to expose more data to the learner, and a heuristic comparison between RF and boosting would have been greatly appreciated.

One can immediately notice the difference between statistician Emily, who took us through the mathematical derivation of the derivative (ha.ha.) function for linear regression (much appreciated Emily!), and computer scientist Carlos, who skipped this bit for logistic regression but provided lots of verbose code to track the running of algorithms during assignments (helps to see what is actually happening under the hood). Excellent lecturers both, thank you!