Would a computer which understood the denotations of all utterances be able to understand intentions and tone?
The first problem here is that there is some question as to whether human beings themselves can understand intentions completely. Most people, when they speak, have a mixture of intentions that they could not fully articulate. Much of psychology, and even of evolutionary theory and Marxist sociology, is built on the assumption that we have subconscious motivations we cannot fully understand. Tone of voice is somewhat less complicated from a philosophical point of view, though recognizing it would require massive databases of utterances and considerable processing power. Also philosophically problematic is the question of understanding itself. If I ask a waiter to bring me a latte and the waiter brings a scone, I can claim to have been misunderstood; if the waiter brings me a latte, I can claim to have been understood -- on a simple level, at least. But can a computer performing the same actions as the waiter be assumed to possess similar internal states?