In ACL-08, Jeff Mitchell and Mirella Lapata presented in their paper "Vector-based Models of Semantic Composition" a careful comparison of different approaches to representing the meaning of phrases and sentences in vector space. Their work was motivated by the fact that most studies of vector-based meaning representation had concentrated on individual words only. A bag-of-words approach is useful for finding topics or meaning components with methods such as LSA or WordICA.
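As a reminder of what such word-level representations look like, here is a minimal numpy sketch of LSA-style word vectors obtained by truncated SVD of a term-document count matrix. The tiny corpus and the dimensionality are illustrative assumptions, not anything from the paper:

```python
import numpy as np

# Toy corpus (an illustrative assumption).
docs = [
    "dogs chase cats",
    "cats chase mice",
    "stocks fell sharply",
    "markets fell today",
]
vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}

# Term-document count matrix: pure bag of words, word order is discarded.
X = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        X[index[w], j] += 1

# Truncated SVD: rows of U_k * S_k serve as k-dimensional word vectors.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
word_vectors = U[:, :k] * S[:k]
print(word_vectors[index["cats"]])
```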
The word-level approach does not take word order into account, which naturally limits the applicability of these methods; in particular, this limitation concerns propositional meaning. Logic-based approaches, on the other hand, have difficulty modeling graded phenomena and contextuality. A classical example is Montague semantics, which is formally attractive but, due to its simplicity, quite far from a realistic model of meaning. It is therefore important to develop models that take vector-space representations to the level of sentences. The usefulness of such representations is explained carefully by Peter Gärdenfors in his book on conceptual spaces.
Mitchell and Lapata considered a wide range of composition models, which they evaluated empirically on a sentence similarity task. Their main conclusion was that multiplicative models outperform additive alternatives when the computational models are compared against human judgments. Classical work in this area includes Smolensky's 1990 article, in which he proposed tensor products as a means of binding variables and representing symbolic structures in a vector-based framework. Since 2008, many researchers have continued work in this area, including Erk and Padó (2008), Turney and Pantel (2010), Baroni and Zamparelli (2010), Grefenstette and Sadrzadeh (2011), and Clarke (2012).
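To make the contrast concrete, here is a minimal numpy sketch of the basic additive and multiplicative composition functions compared by Mitchell and Lapata, together with a Smolensky-style tensor product. The words and vectors are invented for illustration and are not taken from the paper:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity, the usual way composed phrase vectors are compared."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

u = np.array([0.2, 0.7, 0.1])   # hypothetical vector for "horse"
v = np.array([0.5, 0.1, 0.4])   # hypothetical vector for "ran"

additive = u + v        # p_i = u_i + v_i
multiplicative = u * v  # p_i = u_i * v_i (element-wise product)

# Smolensky's tensor product keeps track of which vector contributed
# what (useful for variable binding), at the cost of squaring the
# dimensionality of the representation.
tensor = np.outer(u, v)  # p_ij = u_i * v_j

w = np.array([0.3, 0.6, 0.2])   # hypothetical vector for "galloped"
print(cosine(multiplicative, u * w))  # phrase-to-phrase similarity
```

Note how the multiplicative model acts as a kind of mutual filter: only the dimensions on which both words have weight survive in the composed vector, which is one intuition for why it tracked human similarity judgments better in their experiments.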