What is the parts-of-speech technique in sentiment analysis?

In an article, I saw sentiment analysis using a Parts of Speech (POS) technique. When I searched, I found some papers on POS, but I couldn't understand what POS basically is. I am new to sentiment analysis, so please help me understand POS.
machine-learning sentiment-analysis
asked Sep 10 at 7:57 – SRJ577
3 Answers
Parts of Speech (POS)
This is the name given to the process of labelling each of the words (often called tokens) of a sentence, or of many sentences, with grammatical descriptions such as noun, adjective or adverb. The labels can get quite specific, for example also distinguishing between types of nouns (proper nouns etc.).
You can then use these descriptions of the tokens as input to a model, or to filter the tokens and extract only the parts you are interested in.
POS tags are usually part of the output when we parse a block of text using an NLP toolkit, such as spaCy. Have a look at the spaCy documentation for its available POS tags.
Here is a snippet of the parse tree for the sentence "Apple is looking at buying a UK startup for $1 billion.": Apple has been recognised as a proper noun (NNP) as well as being the subject of the first verb (shown by the arrow labelled nsubj).
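A minimal spaCy sketch that produces this kind of output (assuming the small English model en_core_web_sm is installed; exact tags can vary by model version):
import spacy

# assumes: python -m spacy download en_core_web_sm has been run
nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying a UK startup for $1 billion.")

for token in doc:
    # text, coarse POS, fine-grained Penn Treebank tag, dependency label
    print(token.text, token.pos_, token.tag_, token.dep_)

# expected along the lines of: Apple PROPN NNP nsubj, looking VERB VBG ROOT, ...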
For a nice introduction to POS, among many other NLP terms, check out this article.
Sentiment Analysis Perspective
There are many, many reasons to include POS in a sentiment model (some examples below), but they all really boil down to one overarching reason: polysemy, the definition of which is:
the coexistence of many possible meanings for a word or phrase.
So, essentially, words in different contexts can have different meanings. Knowing which meaning is intended is a massive gain in information that we can pass to a model!
The word duck can be a noun (the bird) or a verb (the motion, to crouch down). If we can tell a model which one of these it is in a given sentence, the model can learn to make a lot more sense out of the sentence.
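To make that concrete, a quick sketch with NLTK's default tagger (the exact tags depend on the tagger model, so treat the output as illustrative):
import nltk
# first run only: nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')

print(nltk.pos_tag(nltk.word_tokenize("A duck swam across the pond")))
# 'duck' should come out as a noun, e.g. ('duck', 'NN')
print(nltk.pos_tag(nltk.word_tokenize("You need to duck before the low door")))
# 'duck' should come out as a verb, e.g. ('duck', 'VB')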
Beyond distinguishing between meanings of single words, we can also simply use words based on their usage or placement. One example would be the adverb however, if our parser is good enough to tell us that it is being used in a particular sentence as a contrasting conjunction (which, technically, would be grammatically incorrect!). An example sentence could be:
I really love muffins, however, I hate strawberries.
We have two clauses: one before however and one after. The first clause is positive, the latter negative. If we have a scale of -5 to +5 for the sentiment of each clause (perhaps the mean of the word scores in that clause), we could imagine scores such as +3 for the positive clause and -3 for the negative one.
This is where I have seen some models (VADER, SentiStrength, etc.) use POS to scale those base scores. In our example, perhaps however would be used to increase the magnitude of the negative clause's score by 10%, giving it a final score of -3.3. Whether or not that makes sense depends on the use case, the data and probably the developer's general experience.
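A toy sketch of that idea (this is not the actual algorithm of VADER or SentiStrength, and the word scores here are made up purely for illustration):
# hypothetical word-level scores on the -5 to +5 scale
WORD_SCORES = {"love": 3.0, "hate": -3.0, "really": 1.0}

def clause_score(clause):
    # mean of the known word scores in the clause (0 for unknown words)
    words = [w.strip(",.").lower() for w in clause.split()]
    return sum(WORD_SCORES.get(w, 0.0) for w in words) / max(len(words), 1)

def sentence_scores(sentence, boost=1.1):
    # if 'however' signals a contrast, magnify the second clause's score by 10%
    lowered = sentence.lower()
    if "however" in lowered:
        first, second = lowered.split("however", 1)
        return clause_score(first), clause_score(second) * boost
    return (clause_score(sentence),)

print(sentence_scores("I really love muffins, however, I hate strawberries."))
# roughly (1.0, -0.83): the contrasting clause's negativity is magnified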
Summary
There are many uses for POS, and you can imagine quite a few more, whether to hand-tailor a sentiment model or just to produce more features. In any case, it is a process that extracts more information from the original raw text, applying language models (like grammar!) that have been tested and are known to be robust for any official form of writing.

answered Sep 10 at 8:05, edited Sep 10 at 21:23 – n1k31t4 (accepted)
You've missed why it's used for sentiment analysis. Not only does it detect to which noun phrase an adjective applies (or in more complex analysis, how two noun phrases are being compared), it also allows detecting the difference between e.g. the adjective "Nice" and the proper noun "Nice".
– OrangeDog
Sep 10 at 16:09
@OrangeDog - thanks for adding another use case. I made a similar point between Apple being an object noun (the fruit) and a proper noun (the company). There are many other use cases of POS, many of which can be found in the article I linked.
– n1k31t4
Sep 10 at 16:32
Your example doesn't express any sentiment, so it's an odd choice.
– OrangeDog
Sep 10 at 16:36
I will edit it to include more specific use cases.
– n1k31t4
Sep 10 at 16:38
OrangeDog & n1k31t4, thanks for your valuable suggestions.
– SRJ577
Sep 11 at 10:34
Parts of speech explain how a word is used in a sentence, i.e. whether it is a verb, noun, adjective and so on.
In text processing, those POS (or word classes) are usually represented by their abbreviations, and we call these tags.
For example, nltk uses the Penn Treebank tagset by default:
https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html
import nltk
# the tagger model may need to be downloaded first: nltk.download('averaged_perceptron_tagger')
nltk.pos_tag(['I', 'like', 'playing', 'tennis'])
It will output:
[('I', 'PRP'), ('like', 'VBP'), ('playing', 'VBG'), ('tennis', 'NN')]
We can check nltk.help.upenn_tagset(), which tells us that:
PRP : Personal Pronoun
VBP : Verb, non-3rd person singular present
VBG : Verb, gerund or present participle
NN : Noun, singular or mass
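To tie this back to sentiment analysis, one simple use of these tags is to keep only the word classes that tend to carry opinions, such as adjectives (JJ*) and adverbs (RB*). A minimal sketch (the tags actually returned may vary with the tagger):
import nltk

tokens = nltk.word_tokenize("The acting was wonderful but the plot felt terribly weak")
tagged = nltk.pos_tag(tokens)
# keep only adjective- and adverb-tagged tokens as candidate opinion words
opinion_words = [word for word, tag in tagged if tag.startswith(("JJ", "RB"))]
print(opinion_words)  # something like ['wonderful', 'terribly', 'weak']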
answered Sep 10 at 8:30 – bakka

This answer does not mention any relationship between POS and sentiment analysis.
– n1k31t4
Sep 11 at 23:08
Both answers helped me, so I tried to mark both as correct answers, but that's not allowed here. My fault, sorry for that.
– SRJ577
Sep 12 at 5:52
POS can be used in multiple applications in text analytics. The majority of techniques in text analytics work on tokenisation and n-grams (breaking sentences down into words). In most cases the semantics of the text is lost when sentences are broken down into words, because standalone words cannot express emotion and meaning as well as groups of words or whole sentences can. So tagging each word in the corpus with its part of speech sometimes makes it easier to recover the context in which the word is used, which ultimately helps in analysing the sentiment.
I have tried the TextBlob and NLTK packages in Python for text analytics. Refer to the links below for more information on using these packages; a small sketch follows them.
https://www.nltk.org/
https://pythonprogramming.net/tokenizing-words-sentences-nltk-tutorial/
https://textblob.readthedocs.io/en/dev/quickstart.html
https://textblob.readthedocs.io/en/dev/
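A small sketch of how TextBlob exposes both POS tags and a sentiment estimate (assuming TextBlob and its corpora are installed; the exact numbers will vary):
from textblob import TextBlob

blob = TextBlob("I really love muffins, however, I hate strawberries.")
print(blob.tags)       # (word, POS tag) pairs, e.g. [('I', 'PRP'), ('really', 'RB'), ('love', 'VBP'), ...]
print(blob.sentiment)  # a (polarity, subjectivity) pair, e.g. Sentiment(polarity=..., subjectivity=...)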
answered Sep 18 at 13:31 – Nirav Gandhi