[ Whoa! Number 50! Maybe a party to celebrate? No, way too soon. We’ll wait for 100. — ed. ]
How about this?
In a Mood? Call Center Agents Can Tell.
by Natasha Singer
The New York Times
Sunday, October 13, 2013
So there’s this multi-layered experience whenever we communicate with other people. We get the sensory experience of their presence: their smells, their sub-olfactory pheromones, sometimes their touch, perhaps their shadows. We get the sound waves of their voices, their coughs, their throat-clearings. We get each person’s individualized, ever-changing visual impact: the way the light plays off his or her skin, hair, clothes. And we get an ineffable “vibe” too — the combination of microexpressions, movements of voluntary and involuntary muscle groups, tonality in voices, cascading associations in our minds based on the words they choose, various body postures as they sit, stand, walk — all that and more added together.
This New York Times article describes efforts to create software that analyzes the emotional undercurrents in a person’s conversation. The author is skeptical. But I believe that Ms. Singer is skeptical about the wrong thing. She is not convinced that a software program can accurately detect the emotions and shifting intentions and hesitations in someone’s speech.
I think, quite to the contrary, that an advanced program probably could do this. The proper place for skepticism is, rather, the notion that this information, taken in isolation, would be definitive of anything.
It would be like analyzing the color tones of a Rembrandt painting and, from that alone, determining whether the image conveys a person of high or low standing. The actual emotional and intentional experience of another person is always (again that ugly but usefully descriptive word) multi-layered.
I would guess that we would need to include senses other than the merely auditory to reach any conclusion of value about what is going on in another person’s experience.
Could this multi-layered software program someday be developed? Could a computer outstrip live human beings (except maybe Proust) in reading the emotions and intentions of a subject? I think so! But it would require many inputs — and the physical presence of the subject.
But what is all this? What is this riff about anyway? An article? Or an image?
Why did you pair the image above with a random New York Times article?
I will leave that mystery unsolved and pass on to the image…
We see a sketch of a woman — or a long-haired, male rocker? — on grainy paper, or perhaps some kind of background (a wall? a toilet stall?).
The woman has downcast, fish-shaped eyes, with heavy lids. She appears to have two tongues — or maybe snake fangs? No, I think they are tongues.
I don’t know what she represents except someone in a state of disgust, or menace. The cleft chin is an odd note — makes me think rocker and not woman… Her/his lack of aggression is another odd note. She/he has a scar on the left cheek — presumably from some previous altercation.
We sense that she inhabits a cruel world, and that she may dole out cruelties herself in order to survive. What this has to do with the New York Times article and emotional truths and don’t-judge-a-book-by-its-cover admonitions, I’m not sure. Except that I will say I would do better if I could hear her talk, that’s for sure.