
Learner reviews and feedback for Natural Language Processing with Sequence Models by deeplearning.ai

4.5
845 ratings
169 reviews

About the Course

In Course 3 of the Natural Language Processing Specialization, you will: a) Train a neural network with GloVe word embeddings to perform sentiment analysis of tweets, b) Generate synthetic Shakespeare text using a Gated Recurrent Unit (GRU) language model, c) Train a recurrent neural network to perform named entity recognition (NER) using LSTMs with linear layers, and d) Use so-called ‘Siamese’ LSTM models to compare questions in a corpus and identify those that are worded differently but have the same meaning. By the end of this Specialization, you will have designed NLP applications that perform question-answering and sentiment analysis, created tools to translate languages and summarize text, and even built a chatbot! This Specialization is designed and taught by two experts in NLP, machine learning, and deep learning. Younes Bensouda Mourri is an Instructor of AI at Stanford University who also helped build the Deep Learning Specialization. Łukasz Kaiser is a Staff Research Scientist at Google Brain and a co-author of TensorFlow, the Tensor2Tensor and Trax libraries, and the Transformer paper....
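To make the GRU language model mentioned in the overview concrete, here is a minimal NumPy sketch of a single GRU cell step, the recurrence that such a model applies to each token embedding in turn. This is an illustrative sketch, not course material: the dimensions, parameter names, and random initialization are hypothetical, and a real language model would add an embedding lookup, an output projection, and trained weights.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One GRU recurrence step: combine input x with hidden state h."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(Wz @ x + Uz @ h + bz)               # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)   # candidate state
    return (1.0 - z) * h + z * h_tilde              # gated interpolation

# Toy dimensions (hypothetical): 8-dim token embeddings, 16-dim hidden state.
rng = np.random.default_rng(0)
d_in, d_h = 8, 16
params = (
    rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h)), np.zeros(d_h),
    rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h)), np.zeros(d_h),
    rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h)), np.zeros(d_h),
)

h = np.zeros(d_h)
for x in rng.normal(size=(5, d_in)):  # a sequence of 5 token embeddings
    h = gru_step(x, h, params)
```

Because the new state is a convex combination of the old state and a tanh candidate, every component of `h` stays in (-1, 1), which is part of what makes GRUs stable over long sequences.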

Top Reviews

SA
Sep 27, 2020

Overall it was a great course, if a little weak in theory; for practical purposes it was sufficient. The question-duplication detection model was very cool. I enjoyed it a lot.

AU
Nov 11, 2021

This is the third course of the NLP Specialization. It was a great course and the instructors were amazing. I really learned and understood everything they taught, like LSTMs, GRUs, Siamese networks, etc.

Filter by:

151 - 175 of 175 Reviews for Natural Language Processing with Sequence Models

By J N B P

Mar 16, 2021

This course is good for practical knowledge, with really good projects, but it lags in the theoretical part; you must be familiar with the concepts to get the most out of it.

By Nguyen B L

Jul 5, 2021

I am now confused by too many deep learning frameworks. Also, the content somewhat repeats the Deep Learning Specialization.

By shinichiro i

Apr 24, 2021

I just want them to use Keras, since I have no inclination to study a shiny new framework such as Trax.

By martin k

Apr 26, 2021

Lectures are quite good, but the assignments are really bad. Not helpful at all.

By Deleted A

Jan 3, 2021

Assignments were easy and similar. Learned less than expected.

By Alberto S

Nov 1, 2020

Content is interesting, but some details are under-explained.

By Ashim M M I T a A S

Nov 22, 2020

Would've been better with a better-documented library.

By Mahsa S

Mar 26, 2021

I would prefer to learn more about NLP in PyTorch.

By Leon V

Sep 28, 2020

Grader output could be more useful.

By Vincent R

Jan 12, 2022

Very superficial information about different types of neural networks and their uses. Use of Trax makes it nearly impossible to Google anything helpful - a lot of the assignments just tell you to read the documentation. To finish the assignments, you can basically copy-paste the code you're given to set up Trax neural networks and generators without having any idea what you're doing (because, again, the content doesn't go over it).

For example, Week 1 in this course introduces you to Trax (which can't be run on Windows), covers aspects of object-oriented programming, and talks at a very high level about how to do things in Trax before moving on to a cursory discussion of generators. Then the assignment has you increment counters, set limits in for loops, copy negative-sentiment code from positive-sentiment code that's already completed, and fill in some code that's basically given right above where you write it.

Overall, any code you write is usually very simple but often easy to get wrong because of the lack of direction, e.g. model(x,y) vs model((x,y)). The discussion boards are invaluable because the mistake might have been 3 functions earlier that the built-in tests didn't catch. It feels like a good effort was put in all around to establish this course, but it feels like a first draft course that was never updated.

By Kota M

Aug 21, 2021

Sadly, the quality of the material is much lower than the previous two courses. The assignments repeatedly ask you to implement data generators with a lot of for loops. We should focus more on network architecture rather than Python programming. That said, the implementation is not good either; learners would have to learn to program anyway.

By Patrick C

Dec 23, 2020

Assignments are very difficult to complete because of inaccurate information (off-by-one errors on indices and other sloppy mistakes). You also don't learn much from them because almost all the code is already provided. It would be much better if they built up your understanding from first principles instead of rushing through fill-in-the-blank problems.

By Mostafa E

Dec 13, 2020

The course did well in explaining the concepts of RNNs... but it may in fact have provided less knowledge than the NLP course in Deep Learning specialization.

I was looking forward to seeing more details on how translation works using LSTMs, going over some famous LSTM networks such as GNMT, and explaining some accuracy measures such as the BLEU score.

By Greg D

Dec 24, 2020

Spends a lot of time going over tedious implementation details rather than teaching interesting NLP topics and nuances, especially in the assignments. Introduction to Trax seems to be the only saving grace, one bonus star :)))).

For a course with Andrew Ng's course as suggested background, this is a big step (read: fall) down.

By Artem R

Dec 1, 2020

The course could be completed without watching videos, just by using the hints and comments in the assignments; the videos are short and shallow, and the choice of deep learning framework (Trax) is questionable - I won't use it in production.

Although the course is four weeks long, it could be finished in four days - I don't feel it was worth the time.

By Miguel Á C T

Mar 19, 2021

The course is good as an example of code that executes tasks correctly; that is, you can see how neural networks are defined and used in Trax. However, from a pedagogical point of view, I find it quite weak. Concepts are poorly explained and notebooks consist of little more than copying and pasting previously displayed code.

By George L

Mar 21, 2021

Compared with the Deep Learning Specialization, this specialization was designed in a way that nobody can understand. Although the assignments could be easy at times, the point is missed when people cannot really understand and learn. Bad teaching. Andrew Ng, please!

By Youran W

Dec 4, 2020

All the assignments are extremely similar.

By Xinlong L

Aug 22, 2021

I did not enjoy the course at all. It looks like the instructor is just reading materials rather than really teaching. He just focused on reading and did not explain anything. I took Andrew's Deep Learning Specialization, and that course was really great, but I am so disappointed in this course. deeplearning.ai, please do strict quality control on the courses; otherwise it harms your brand.

By Yanting H

Oct 13, 2020

Oversimplified illustration of all the core definitions, and it makes no sense to use Trax instead of a popular framework like TensorFlow or PyTorch for the assignments. Also, the assignment design is weak; you can barely learn anything from filling in the blanks.

By Ngacim

Nov 27, 2020

1) The course videos just throw out various terms, and you need to Google them to understand what they mean.

2) The assignments try their best to explain concepts, in a way that often seems redundant.

By Emanuel D

Jan 26, 2021

For me, it is very disappointing; time is spent on irrelevant things, like Python syntax and generators in the first week. Video tutorials on how to use Trax are missing.

By Siddharth S

Sep 19, 2021

Hard-to-follow explanations, and Trax absolutely made it super hard to learn and follow.

By Alice M

Nov 15, 2020

No mentors were available or contactable during this course.

By Nicolás E C R

Dec 23, 2020

Very superficial.