US Secret Service seeks Twitter sarcasm detector
And false positives?
Does it mean type I error or type II error?
Big Brother's watching you
If the sarcasm detector does work, it should be turned into a Firefox add-on and heavily marketed in Germany and Austria!
Well, I would buy it.
- I think that sarcasm is more difficult to be detected than lies or lice.
- I think that sarcasm is more difficult to detect than lies or lice.
- I think that it is more difficult to detect sarcasm than to detect lies or lice.
Which sentence is more natural?
P.S.
I haven't seen a lice detector yet.
P.P.S.
"Sarcasm is difficult to detect." This might be the basic statement. I notice that I tend to express this using "to be detected", as in "Sarcasm is hard to be detected."
Good question, Yutaka! Who knows exactly what they mean by "false positives" in this context? (That's a rhetorical question.) Come to think of it, I'll bet Edward Snowden could actually answer that question. Maybe we could get Brian Williams to ask Snowden that question in his next interview. In any event, it's pretty creepy.
I hope they don't false positive my sarcasm and send the FBI to my front door. Oh, wait a minute… I don't transfer all my thoughts to the world in 140 characters or less using Twitter in the first place.
@Yutaka - As a non-native speaker I am hesitant to comment :))
As @Jingle says, 2 and 3 are fine.
No. 2 is the one I'd use most often; I would even drop the "that" occasionally.
No. 3's structure has a biblical ring to it ("it is easier for a camel to go through the eye of a needle than for a rich person to enter the Kingdom of Heaven") but is clearly correct. And here, too, one could drop the "that".
P.S. I forgot to add a false positive, hope that doesn't inconvenience the FBI too much. Sorry.
Hello. Here is my interpretation of what the article means.
The NSA, FBI, and CIA have been monitoring social media to find illegal or terrorist activity so they can make arrests and prosecute cases against these people. The thing is that sometimes on social media, we make jokes using words that are "flags", alerting these agencies to the possibility of something bad guys would be likely to say.
For example, if my fictional friend John says he is so mad at the Post Office that he intends to blow up the building, then using the words "bomb", "blow up", and "Post Office" in the same post could flag that message as coming from a terrorist individual or group. The software used by these agencies works 24-7 to crawl social media (like a spider, the same way Google works) for such word combinations, and it does nothing until it finds them. When it does, it flags the message for review by a human.
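That flagging step can be sketched in a few lines of Python. The watch phrases and sample messages below are made up purely for illustration; a real system would obviously be far more sophisticated.

```python
# Toy sketch of keyword co-occurrence flagging (illustrative only).
# A message is flagged when two or more watch phrases appear in it,
# mirroring the "bomb" + "Post Office" example above.

FLAG_PHRASES = ["bomb", "blow up", "post office"]

def is_flagged(message: str) -> bool:
    """Flag a message if at least two watch phrases occur in it."""
    text = message.lower()
    hits = sum(1 for phrase in FLAG_PHRASES if phrase in text)
    return hits >= 2

messages = [
    "I'm so mad at the Post Office I could blow up the building!",
    "The post office lost my package again.",
]

flagged = [m for m in messages if is_flagged(m)]
print(flagged)  # only the first message trips two watch phrases
```

Requiring two co-occurring phrases (rather than one) is just one way to cut down on noise; even so, as the next paragraph notes, jokes still slip through in huge numbers.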
The difficult thing about this is that many people make jokes with red-flag keywords, so the number of jokes that get flagged is surely in the millions. Because such a high percentage of flagged messages are harmless, it is very costly to have humans review so many. If they can implement a software program that detects jokes or sarcasm, it would filter out the false positives that do not really need human review, thereby saving millions of dollars by reducing the workload.
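The cost-saving idea can be sketched the same way: run a sarcasm detector over the already-flagged messages and keep only the ones that still look serious. The `sarcasm_score` heuristic below is a toy stand-in I made up (counting a few joke markers); it is not anything a real agency uses.

```python
# Toy sketch of the false-positive filter described above.
# sarcasm_score() is a made-up heuristic, not a real detector.

SARCASM_MARKERS = ("lol", "yeah right", "/s", "just kidding")

def sarcasm_score(message: str) -> float:
    """Fraction of joke markers found in the message (0.0 to 1.0)."""
    text = message.lower()
    hits = sum(marker in text for marker in SARCASM_MARKERS)
    return hits / len(SARCASM_MARKERS)

def needs_human_review(flagged_messages, threshold=0.25):
    """Drop messages the toy detector scores as likely sarcasm."""
    return [m for m in flagged_messages if sarcasm_score(m) < threshold]

flagged = [
    "I will blow up the post office, just kidding lol",
    "Meet at the post office, bring the package",
]
print(needs_human_review(flagged))  # only the second message remains
```

The point of the sketch is the pipeline shape, not the detector itself: every message the filter correctly discards is one a human analyst never has to read, which is where the claimed savings come from.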
Also, this software would not be available to the public: it would have a classified status until it is no longer in use, and even then it would likely not be declassified until it is old enough that the information is useless.