Reading Tea Leaves: How Humans Interpret Topic Models
Probabilistic topic models are a commonly used tool for analyzing text data, where the latent topic representation is used to perform qualitative evaluation of models and guide corpus exploration. Practitioners typically assume that the latent space is semantically meaningful, but this important property has lacked a quantitative evaluation. In this paper, we present new quantitative methods for measuring semantic meaning in inferred topics. We back these measures with large-scale user studies, showing that they capture aspects of the model that are undetected by measures of model quality based on held-out likelihood. Surprisingly, topic models which perform better on held-out likelihood may actually infer less semantically meaningful topics.
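The held-out likelihood measure that the abstract contrasts against is typically computed as the average per-word log-likelihood of unseen documents under the fitted model. A minimal sketch, assuming a model already summarized by topic-word probabilities `phi` and per-document topic proportions `theta` (both hypothetical names, not from the paper):

```python
import numpy as np

def heldout_log_likelihood(docs, phi, theta):
    """Average per-word log-likelihood of held-out documents.

    docs  : list of integer arrays, each holding the word ids of one document
    phi   : (K, V) array of topic-word probabilities, rows sum to 1
    theta : (D, K) array of per-document topic proportions, rows sum to 1
    """
    total_ll, total_words = 0.0, 0
    for d, words in enumerate(docs):
        # Marginal word probability: p(w | d) = sum_k theta[d, k] * phi[k, w]
        word_probs = theta[d] @ phi[:, words]
        total_ll += np.log(word_probs).sum()
        total_words += len(words)
    return total_ll / total_words
```

Higher values mean the model assigns more probability to unseen text; the paper's finding is that this score can move in the opposite direction from human judgments of topic quality.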
Date Found: October 11, 2010
Date Produced: January 19, 2010