Why should sarcasm be detected?

US Secret Service seeks Twitter sarcasm detector

And false positives?
Does it mean type I error or type II error?

Big BrotherĀ“s watching you

If the sarcasm detector does work, it should be turned into a Firefox add-on and heavily marketed in Germany and Austria!

Well, I would buy it.

  1. I think that sarcasm is more difficult to be detected than lies or lice.
  2. I think that sarcasm is more difficult to detect than lies or lice.
  3. I think that it is more difficult to detect sarcasm than to detect lies or lice.

Which sentence is more natural?

P.S.
I haven't seen a lice detector yet.

P.P.S.
'Sarcasm is difficult to detect.' This might be the basic statement. I notice that I tend to express this using 'to be detected', as in 'Sarcasm is hard to be detected.'

Good question, Yutaka! Who knows exactly what they mean by "false positives" in this context? (That's a rhetorical question.) Come to think of it, I'll bet Edward Snowden could actually answer that question. Maybe we could get Brian Williams to ask Snowden that question in his next interview. In any event, it's pretty creepy.
I hope they don't false-positive my sarcasm and send the FBI to my front door. Oh, wait a minute… I don't transfer all my thoughts to the world in 140 characters or less using Twitter in the first place.

@yutaka - Numbers 2 and 3 are fine.

@Yutaka - As a non-native speaker I am hesitant to comment :))

As @Jingle says, 2 and 3 are fine.

No. 2 is the one I'd use most often; I would even drop the 'that' occasionally.
No. 3's structure has a biblical ring to it ("it is easier for a camel to go through the eye of a needle than for a rich person to enter the Kingdom of Heaven") but is clearly correct. And here, too, one could drop the 'that'.

P.S. I forgot to add a false positive; hope that doesn't inconvenience the FBI too much. Sorry.

Hello. Here is my interpretation of what the article means.

The NSA, FBI, and CIA have been monitoring social media to find illegal or terrorist activity so they can make arrests and prosecute cases against these people. The thing is that on social media we sometimes make jokes using words that are "flags" alerting these agencies to the possibility of something bad guys would be likely to say.

For example, if my fictional friend John says he is so mad at the Post Office that he intends to blow up the building, then using the words "bomb", "blow up", and "Post Office" in the same sentence could flag that message as being from a terrorist individual or group. The software these agencies use works 24-7 to crawl social media (like a spider, the same way Google works) for such word combinations, and it does nothing until it finds such examples. When it does find one, it flags it for review by a human.
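Just to illustrate the co-occurrence idea above, here is a minimal sketch in Python. The term list and the "two hits" threshold are purely illustrative assumptions on my part, not anything from a real system.

```python
# Hypothetical sketch of the keyword-flagging idea: a message is escalated
# for human review only when multiple red-flag terms co-occur in one post.
# The terms and the threshold below are made up for illustration.

RED_FLAG_TERMS = {"bomb", "blow up", "post office"}

def flag_message(text: str, min_hits: int = 2) -> bool:
    """Return True when enough red-flag terms co-occur in the message."""
    lowered = text.lower()
    hits = sum(1 for term in RED_FLAG_TERMS if term in lowered)
    return hits >= min_hits

flag_message("I'm so mad I could blow up the post office")  # flagged
flag_message("The post office lost my package again")       # not flagged
```

Note that this only counts distinct terms in one message; a real crawler would presumably also weigh context, sender history, and so on.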

The difficult thing about this is that many people make jokes with red-flag keywords, so the number of jokes that get flagged is surely in the millions. Because such a high percentage of flagged messages are of this type, having humans review them all is very costly. If they can implement software that detects jokes or sarcasm, it would filter out the false positives that really do not need to be reviewed by a human, and thereby save millions of dollars by reducing the workload.
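The triage step described above can be sketched as a simple filter. The `looks_like_a_joke` check here is a stand-in assumption (a cue-word list); an actual sarcasm detector would be a trained classifier, which is exactly the hard part the article is about.

```python
# Minimal sketch of the triage pipeline: flagged messages pass through a
# (hypothetical) joke/sarcasm filter, and only the ones it cannot dismiss
# reach the human-review queue. The cue list is an illustrative assumption.

JOKE_CUES = {"lol", "jk", "/s", "haha"}

def looks_like_a_joke(text: str) -> bool:
    """Crude stand-in for a sarcasm classifier: look for joke cues."""
    lowered = text.lower()
    return any(cue in lowered for cue in JOKE_CUES)

def triage(flagged_messages: list[str]) -> list[str]:
    """Keep only flagged messages the joke filter could not dismiss."""
    return [m for m in flagged_messages if not looks_like_a_joke(m)]

queue = triage([
    "gonna blow up the post office lol jk",
    "meet at the post office, bring the package",
])
# Only the second message remains for human review.
```

The cost argument in the paragraph follows directly: every message the filter dismisses is one a human reviewer never sees.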

Also, this software would not be something available to the public, as it would have a classified status until it is no longer in use, and even then it would not likely be declassified until it is sufficiently aged that the information is useless.
