Mar 05, 2018
Capstone did provide a true test of Data Analytics skills. It's like being left alone in a jungle to survive for a month: either you succumb to nature or come out alive with a smile and confidence.
Mar 29, 2017
Wow, I finally managed to finish the specialization! I definitely learned a lot, and also discovered the difficulties of building predictors while trying to balance speed, accuracy, and memory constraints!
by Ajay K P•
Mar 30, 2018
I really had fun working on this project.
by Artem V•
Sep 14, 2017
Nice balance of focused and open-ended
by Gary B•
Sep 15, 2017
Tough capstone that took a lot of time.
by Yew C C•
Jul 20, 2016
Good and interesting project.
by Tiberiu D O•
Sep 22, 2017
by Sabawoon S•
Nov 25, 2017
by Filipe R•
Oct 07, 2018
by Kevin M•
Jan 15, 2018
by Richard I C•
Jul 19, 2016
As a capstone to a series of courses that covered data science and R, I found this one to be a bit lacking. There was no involvement from the professors at JHU or the folks at SwiftKey. As was mentioned in another review, the course feels abandoned. All you get is a few short (two minutes or so) videos that give you little in the way of instruction or direction. Basically, they just say, "Go do this. Good luck!"
There were also no Mentors or TAs to guide students or answer questions. It was the students helping each other through the forums. Sometimes it was helpful and everyone involved learned something. Other times, it was the blind trying to lead the visually-impaired.
On a positive note, you will use all of the skills from the previous courses: writing R functions, performing exploratory analysis and publishing it via RPubs. Your final product will be displayed for everyone via ShinyApps and a presentation using R Presentation (also published via RPubs).
On a(nother) negative note, Natural Language Processing is not an easy topic to just walk into and feel confident about delivering a working next-word prediction algorithm in about eight (8) weeks. You're left reading academic journal articles and watching multiple videos from another Coursera course (one that actually focuses on NLP, and takes place over several courses and several months!).
Supposedly, there is work going on to update the course, so hopefully future students will get a better experience. I did take a bit away from this course, especially since I made more than one attempt to complete it. However, it was definitely a shock to find myself missing the things one typically finds in a learning environment -- descriptive background, assistance with problems, etc. -- and seeing that I was, for all intents and purposes, on my own. Even in the professional world of data analysis, I have never experienced the lack of support that I found in this course.
With that, I am giving it three (3) stars. As I said, I did learn a bit, but it was a struggle that required multiple attempts to complete. This would have been better off as a standalone topic (which it already is, from another Coursera-affiliated school), or as a capstone course that builds on a topic more in the wheelhouse of the JHU professors: a capstone focusing on bioinformatics or biostatistics would have been amazing by comparison.
by David M•
Jul 21, 2016
This was essentially a self-study project with some peer interaction. The topic, approach, and standards differed from all of the other units in the Data Science specialization. I found the other units more enjoyable.
Learning the essentials of NLP quickly is necessary to begin the project. I ordered a textbook, for example, and I was fortunate that it arrived quickly. If NLP is a prerequisite for this capstone project - whether in the form of a prior class or textbook knowledge - this should be indicated clearly on the course description page.
Nevertheless, the main learning that I achieved with this course was in the area of software engineering - specifically, how to take advantage of vectorization in R to achieve reasonable computing performance. While this is a valuable skill, it doesn't seem the proper focus of a capstone course in a sequence focused primarily on other topics.
As noted elsewhere in these comments, there was a complete absence of any traditional teaching support, and learning outcomes suffered as a result. The missing resources included instructors, mentors, partners, and learning materials.
The course site notes an expected time requirement of a few hours per week. My commitment was 20 hours per week, under some pressure. Numerous students take this "course" multiple times in order to allow for reasonable software development time.
Producing working software was fun, as it always is. The course learner community was supportive, which is fortunately typical for Coursera.
All in all, this project was *not* an effective capstone for the Data Science specialization. The project was interesting in its way, but it felt 'parachuted in' to this learning sequence.
by Guilherme B D J•
Mar 24, 2017
The main reason for my rating is that the course is so "loose" about what you are supposed to achieve incrementally each week that it can lead to some difficult situations.
Just to give my example: the first week was a piece of cake, and I didn't feel it really contributed to the following weeks. Then I struggled with the suggested library (tm) until I got support through the discussion forums and someone suggested I use quanteda.
Then things started to run smoothly, or so I thought. When implementing the language model (which, at first, I thought was supposed to be Katz back-off, KBO), I got stuck for a long period. Not because my model was wrong (I was able to implement it and check it against some hand-worked, verified examples, for which I should probably say thanks again), but because I was not able to make it run efficiently enough for the given constraints.
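(For readers curious what the kind of model this reviewer describes looks like, here is a minimal sketch in Python of "stupid backoff", a simpler relative of Katz back-off: fall back to shorter contexts when the longest one has no continuation. The function names and the discount factor `alpha=0.4` are illustrative assumptions, not the course's or the reviewer's actual solution.)

```python
from collections import Counter, defaultdict

def build_ngram_tables(sentences, max_n=3):
    """Count n-grams up to max_n from pre-tokenized sentences."""
    tables = {n: Counter() for n in range(1, max_n + 1)}
    for tokens in sentences:
        for n in range(1, max_n + 1):
            for i in range(len(tokens) - n + 1):
                tables[n][tuple(tokens[i:i + n])] += 1
    return tables

def predict_next(tables, context, max_n=3, alpha=0.4):
    """Stupid backoff: try the longest context first; on a miss,
    shorten the context and discount the score by alpha."""
    scores = defaultdict(float)
    weight = 1.0
    for n in range(max_n, 1, -1):
        ctx = tuple(context[-(n - 1):])
        if len(ctx) < n - 1:
            continue  # context too short for this order
        hits = {gram[-1]: c for gram, c in tables[n].items() if gram[:-1] == ctx}
        total = sum(hits.values())
        for word, c in hits.items():
            scores[word] = max(scores[word], weight * c / total)
        if hits:
            break
        weight *= alpha
    if not scores:  # nothing matched: fall back to unigram frequencies
        total = sum(tables[1].values())
        for (word,), c in tables[1].items():
            scores[word] = weight * c / total
    return max(scores, key=scores.get) if scores else None
```

A naive scan over every n-gram, as above, is exactly the kind of thing that is too slow at the scale of the course corpus; indexing the tables by context (a dict keyed on the context tuple) is the usual fix.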
Being stuck at this stage for longer than I wanted, I had to sacrifice other important steps of the data analysis pipeline so as not to jeopardize my final delivery by missing the due date. I know this is exactly what happens in "real" life, but I think better guidance could ensure that students spend a more even amount of time across all the steps.
All things considered, I think the Capstone was really interesting and likely took more than the 4-9 hours per week, but most of this is probably because of the problems I faced.
I believe that with better guidance on the paths to follow, or maybe some suggested libraries to use, a lot of "noise" (useless difficulty) could be removed, and this course would definitely get more stars.
by John D M•
Sep 20, 2019
A capstone is typically defined as integrating key material from a course. This capstone did not require material from key courses, specifically the machine learning, regression models, and statistical inference courses. That was a great shame. Instead, it threw us into a completely new area, Natural Language Processing.
There were many complaints about that, and I agree. However, it was a challenging task to explore an area of data science we hadn't touched on, and challenging in terms of the programming and the enormous data file sizes. In that sense it was probably good preparation for unexpected challenges in the workplace, and therefore good training to make us real data scientists. Still, I would like to see the capstone rejigged to include material from the missing courses. As for NLP, some students claim it is not a useful area to study, but in my case it is exactly the right thing for me, as I work on analyzing user queries in the form of tickets in a CRM. I found it especially trying to integrate some material such as Kneser-Ney smoothing, and opted for a more basic approach. My learning experience would have been better with some proper instruction in that area.
by Diego C G•
Apr 13, 2016
Could be better. The teacher sometimes explains the concepts in a difficult way, and doesn't always show how to do things in practice.
But you will get curious, and if in doubt, you can find simpler explanations on the web; the forum is also very good.
The assignments are hard and you will need to do research to complete them, but that is the best way to learn.
I think the specialization is good for someone without much knowledge of the field (like me). But it's only the start!
by Andrew S•
Jun 26, 2017
I felt this course was the weakest of the series. The capstone focuses on building an NLP application, which although I find interesting, does not make for a good final problem as NLP was not really covered in the specialization and NLP is particularly challenging in R. That said, the series as a whole is well worth the time and effort.
by Antonio E C•
Dec 30, 2016
It's been a challenge to learn all these new concepts and package them into a working product in such a small period of time. I am glad of the things I learned. Also, in my opinion the materials / resources given to this course are scarce compared with previous courses of the specialization.
by Matias T•
Jul 18, 2016
Hi, the project was nice and in the end I learned some new things, but there was no one to provide any guidance. The videos said that personnel from SwiftKey would be there and that JHU teachers could provide some insights. It looked a bit like a phantom course.
by Hang Y•
Feb 10, 2018
It's an inspiring project in the field of NLP; however, my major concern is that this topic and the corresponding skills were never introduced before the capstone project.
by Rajib K•
Sep 04, 2017
I would say, if we could introduce a capstone project more related to the first
by Max D•
Aug 19, 2019
NLP module should definitely be included into JHU Data Science specialization.
by Michael N•
Jan 13, 2018
Had to learn a lot on our own but very valuable content once acquired.
by Pradnya C•
Apr 14, 2016
Most stressful, but interesting. Not enough material was provided.
by Adam B•
Jun 06, 2016
I liked every course in this specialization except
by Tracy S•
Nov 28, 2016
It could've given more instructions!
by Jeffrey G•
Jan 17, 2018
With the exception of R Shiny programming, there was nothing about this course that required any real knowledge of anything in any course of the JHU Data Science certificate track. Why, you ask? Well, most of the class was just about learning natural language processing (NLP), which wasn't covered.

What about R programming, you ask? Most of the NLP packages in R that I tested couldn't process a 200MB text file in a reasonable amount of time or with a reasonable memory footprint. I ran Python and R programs in parallel to do sentence and word tokenization, and Python's nltk was (not exaggerating) 100x faster than R's NLP package, while R's tm package took 4GB of memory to parse the same 200MB corpus. In 2018, that's just unacceptable. There's no way you could ever write production-quality NLP code using these R packages. After the course was finished, someone pointed out an R package that could adequately accomplish the task, but by then it was far too late.

Even R's basic data structures weren't up to the challenge. I ended up building my model in Python, exporting it as JSON, and then importing that into my Shiny app. Comparing basic data structures in Python and R to represent the same JSON file (i.e., just read in the file and measure the size of the resulting object), R's list was nearly 2x as large in RAM as Python's dict.

All of this, combined with very little reference to most of the material in the other nine classes in this track, left me very disappointed. The reason I gave the class two stars and not one is that what we did learn about NLP was useful. Having to solve a gnarly, real-world problem starting from raw data is useful. Having to write an app with actual users interacting with it is useful. But could just about everything about this class have been done a lot better? Yes. I think a machine learning project that tied together everything we'd worked on up to this point would have been a lot more fun and rewarding.
by Michael S•
Jul 02, 2016
Of all the offerings in the specialization, this one felt like it was thrown together in less than an hour. I expected to have to learn quite a bit of material on my own, but even the references to additional materials were very thin.
I could have saved many days if more guidance on the project workflow had been given. The pre-processing of the data was quite extensive (nine steps before generating the n-gram tables I used in my model) and was the key to getting decent results, IMHO, but one had to step on quite a few landmines to figure this out.
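(For readers wondering what such pre-processing looks like, here is a hedged sketch in Python. The reviewer's nine steps are not spelled out anywhere, so the steps below are typical stand-ins for a corpus-cleaning pipeline, not the course's actual workflow.)

```python
import re

def clean_corpus(lines):
    """Illustrative cleaning pipeline: normalize raw text lines into
    token lists suitable for building n-gram tables."""
    out = []
    for line in lines:
        text = line.lower()                         # 1. lowercase
        text = re.sub(r"https?://\S+", " ", text)   # 2. strip URLs
        text = re.sub(r"[0-9]+", " ", text)         # 3. drop numbers
        text = re.sub(r"[^a-z' ]", " ", text)       # 4. drop punctuation, keep apostrophes
        text = re.sub(r"\s+", " ", text).strip()    # 5. collapse whitespace
        if text:
            out.append(text.split())                # 6. tokenize on spaces
    return out
```

Real pipelines from the course forums also typically include profanity filtering, sentence splitting, and handling of non-ASCII characters, which is how one gets to nine or so steps.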
The problem was an interesting one, and I ended up reworking it after passing with 95% (the only class in the specialization I didn't get 100% on) because I didn't have time to implement much of what I had figured out through hard knocks.