Topic models allow us to cluster similar documents in a corpus together.
Don’t we already have tools for that?
Yes! Dictionaries and supervised learning.
So what do topic models add?
Data: UK House of Commons’ debates (PMQs)
Sample/feature selection decisions
Topic models offer an automated procedure for discovering the main “themes” in an unstructured corpus
They require no prior information, training set, or labelling of texts before estimation
They allow us to automatically organise, understand, and summarise large archives of text data.
Latent Dirichlet Allocation (LDA) is the most common approach (Blei et al., 2003), and one that underpins more complex models
Topic models are an example of mixture models:
Last week, we introduced the idea of a probabilistic language model
A language model is represented by a probability distribution over words in a vocabulary
The Naive Bayes text classification model is one example of a generative language model, where each class of document is associated with its own probability distribution over the words in the vocabulary
Topic models are also language models
A “topic” is a probability distribution over a fixed word vocabulary.
Consider a vocabulary: gene, dna, genetic, data, number, computer
When speaking about genetics, you will:
When speaking about computation, you will:
Topic | gene | dna | genetic | data | number | computer |
---|---|---|---|---|---|---|
Genetics | 0.4 | 0.25 | 0.3 | 0.02 | 0.02 | 0.01 |
Computation | 0.02 | 0.01 | 0.02 | 0.3 | 0.4 | 0.25 |
Note that no word has probability of exactly 0 under either topic.
In a topic model, each document is described as being composed of a mixture of corpus-wide topics
For each document, we find the topic proportions that maximize the probability that we would observe the words in that particular document
Imagine we have two documents with the following word counts
Topic | gene | dna | genetic | data | number | computer |
---|---|---|---|---|---|---|
Genetics | 0.4 | 0.25 | 0.3 | 0.02 | 0.02 | 0.01 |
Computation | 0.02 | 0.01 | 0.02 | 0.3 | 0.4 | 0.25 |
What is the probability of observing Document A’s word counts under the “Genetics” topic?
\[\begin{eqnarray} P(W_A|\mu_{\text{Genetics}}) &=& \frac{M_A!}{\prod_{j=1}^JW_{A,j}!}\prod_{j=1}^J\mu_{\text{Genetics},j}^{W_{A,j}} \\ &=& 0.0000000798336 \end{eqnarray}\]
What is the probability of observing Document A’s word counts under the “Computation” topic?
\[\begin{eqnarray} P(W_A|\mu_{\text{Computation}}) &=& \frac{M_A!}{\prod_{j=1}^JW_{A,j}!}\prod_{j=1}^J\mu_{\text{Computation},j}^{W_{A,j}} \\ &=& 0.0000000287401 \end{eqnarray}\]
What is the probability of observing Document A’s word counts under an equal mixture of the two topics?
\[\begin{eqnarray} P(W_A|\mu_{\text{Comp+Genet}}) &=& \frac{M_A!}{\prod_{j=1}^JW_{A,j}!}\prod_{j=1}^J\mu_{\text{Comp+Genet},j}^{W_{A,j}} \\ &=& 0.001210891 \end{eqnarray}\]
What is the probability of observing Document B’s word counts under the “Genetics” topic?
\[\begin{eqnarray} P(W_B|\mu_{\text{Genetics}}) &=& \frac{M_B!}{\prod_{j=1}^JW_{B,j}!}\prod_{j=1}^J\mu_{\text{Genetics},j}^{W_{B,j}} \\ &=& 0.0000112266 \end{eqnarray}\]
What is the probability of observing Document B’s word counts under the “Computation” topic?
\[\begin{eqnarray} P(W_B|\mu_{\text{Computation}}) &=& \frac{M_B!}{\prod_{j=1}^JW_{B,j}!}\prod_{j=1}^J\mu_{\text{Computation},j}^{W_{B,j}} \\ &=& 0.00000000004790016 \end{eqnarray}\]
What is the probability of observing Document B’s word counts under an equal mixture of the two topics?
\[\begin{eqnarray} P(W_B|\mu_{\text{Comp+Genet}}) &=& \frac{M_B!}{\prod_{j=1}^JW_{B,j}!}\prod_{j=1}^J\mu_{\text{Comp+Genet},j}^{W_{B,j}} \\ &=& 0.0007378866 \end{eqnarray}\]
What is the probability of observing Document B’s word counts under a 60-40 mixture of the two topics?
\[\begin{eqnarray} P(W_B|\mu_{\text{Comp+Genet}}) &=& \frac{M_B!}{\prod_{j=1}^JW_{B,j}!}\prod_{j=1}^J\mu_{\text{Comp+Genet},j}^{W_{B,j}} \\ &=& 0.001262625 \end{eqnarray}\]
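These mixture likelihoods can be reproduced with base R’s `dmultinom()`. The word counts below are hypothetical (the actual counts for Documents A and B are not reproduced on these slides), but the point carries over: a document containing words from both topics is far more likely under a mixture than under either topic alone.

```r
# Topic-word probabilities from the table above
mu_gen  <- c(0.4, 0.25, 0.3, 0.02, 0.02, 0.01)   # Genetics
mu_comp <- c(0.02, 0.01, 0.02, 0.3, 0.4, 0.25)   # Computation

# Hypothetical word counts for a mixed-content document
# (gene, dna, genetic, data, number, computer)
w <- c(2, 1, 1, 2, 1, 1)

# Multinomial likelihood under each pure topic and under a 50-50 mixture
p_gen  <- dmultinom(w, prob = mu_gen)
p_comp <- dmultinom(w, prob = mu_comp)
p_mix  <- dmultinom(w, prob = 0.5 * mu_gen + 0.5 * mu_comp)

# The mixture assigns far higher probability than either pure topic
c(p_gen, p_comp, p_mix)
```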
Implication: Our documents may be better described in terms of mixtures of different topics than by one topic alone.
A topic model simultaneously estimates two sets of probabilities
The probability of observing each word for each topic
The probability of observing each topic in each document
These quantities can then be used to organise documents by topic, assess how topics vary across documents, etc.
LDA is a probabilistic language model.
We assume that each document \(d\) in the corpus is “generated” as follows:
However, we only observe documents!
The goal of LDA is to estimate hidden parameters (\(\beta\) and \(\theta\)) starting from \(w\).
The LDA model is a Bayesian mixture model for discrete data which describes how the documents in a dataset were created
The number of topics, \(K\), is selected by the researcher
Each of the \(K\) topics is a probability distribution over a fixed vocabulary of \(N\) words
Each of the \(D\) documents is a probability distribution over the \(K\) topics
Each word in each document is drawn from the topic-specific probability distribution over words
A probability distribution is a function that gives the probabilities of the occurrence of different possible outcomes for a random variable
Probability distributions are defined by their parameters
Different parameter values change the distribution’s shape and describe the probabilities of the different events
The notation “\(\sim\)” means to “draw” from the distribution
There are two key distributions that we need to know about to understand topic models: the Multinomial and the Dirichlet distributions
The multinomial distribution is a probability distribution describing the results of a random variable that can take on one of K possible categories
The multinomial distribution depicted has probabilities \([0.2, 0.7, 0.1]\)
A draw (of size one) from a multinomial distribution returns one of the categories of the distribution
A draw of a larger size from a multinomial distribution returns several categories of the distribution in proportion to their probabilities
We have seen this before! Naive Bayes uses the multinomial distribution to describe the probability of observing words in different categories of documents.
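In R, `rmultinom()` draws from a multinomial. A size-one draw selects a single category (returned as a 0/1 indicator vector); a larger draw returns counts roughly proportional to the probabilities.

```r
set.seed(1)
p <- c(0.2, 0.7, 0.1)

# A draw of size one: exactly one category is selected
single_draw <- rmultinom(1, size = 1, prob = p)

# A draw of size 1000: counts appear in proportion to p
large_draw <- rmultinom(1, size = 1000, prob = p)
```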
The Dirichlet distribution is a distribution over the simplex, i.e., positive vectors that sum to one
A draw from a Dirichlet distribution returns a vector of positive numbers that sum to one
In other words, we can think of draws from a Dirichlet distribution being themselves multinomial distributions
The parameter \(\alpha\) controls the sparsity of the draws from the Dirichlet distribution.
p = [0.21,0.5,0.28]
p = [0.22,0.6,0.18]
p = [0.37,0.2,0.43]
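Base R has no `rdirichlet()`, but a Dirichlet draw can be built from normalised Gamma draws; this standard construction is sketched here to show how \(\alpha\) controls sparsity.

```r
set.seed(42)

# Draw once from Dirichlet(alpha) via independent Gamma(alpha_k, 1) draws
rdirichlet1 <- function(alpha) {
  x <- rgamma(length(alpha), shape = alpha)
  x / sum(x)
}

p_dense  <- rdirichlet1(c(5, 5, 5))       # large alpha: draws near (1/3, 1/3, 1/3)
p_sparse <- rdirichlet1(c(0.1, 0.1, 0.1)) # small alpha: mass concentrates on few categories
```

Each draw is itself a valid multinomial probability vector: positive entries that sum to one.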
LDA assumes a generative process for documents:
Each topic is a probability distribution over words
For each document, draw a probability distribution over topics
For each word in each document
Draw one of \(K\) topics from the distribution over topics \(\theta_d\)
Given \(z_i\), draw one of \(N\) words from the distribution over words \(\beta_k\)
Note: \(\eta\) and \(\alpha\) govern the sparsity of the draws from the Dirichlet distributions. As they \(\rightarrow 0\), the multinomials become sparser.
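The generative process above can be simulated directly. This sketch uses the two-topic gene/computer example from earlier; the function and variable names are illustrative, not part of any package.

```r
set.seed(7)
vocab <- c("gene", "dna", "genetic", "data", "number", "computer")
K <- 2

# Topic-word distributions (beta), one row per topic
beta <- rbind(Genetics    = c(0.4, 0.25, 0.3, 0.02, 0.02, 0.01),
              Computation = c(0.02, 0.01, 0.02, 0.3, 0.4, 0.25))

rdirichlet1 <- function(alpha) {
  x <- rgamma(length(alpha), shape = alpha)
  x / sum(x)
}

generate_doc <- function(n_words, alpha = 0.5) {
  theta_d <- rdirichlet1(rep(alpha, K))                     # topic proportions for this document
  z <- sample(K, n_words, replace = TRUE, prob = theta_d)   # a topic for each word
  sapply(z, function(k) sample(vocab, 1, prob = beta[k, ])) # a word given its topic
}

doc <- generate_doc(10)
```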
From a collection of documents, infer the per-topic word distributions \(\beta_k\) and the per-document topic proportions \(\theta_d\)
Then use estimates of these parameters to perform the task at hand \(\rightarrow\) information retrieval, document similarity, exploration, and others.
Assuming the documents were generated in this way makes it possible to back out the shares of topics within documents and the shares of words within topics
Estimation of the LDA model is done in a Bayesian framework
Our \(Dir(\alpha)\) and \(Dir(\eta)\) are the prior distributions of the \(\theta_d\) and \(\beta_k\)
We use Bayes’ rule to update these prior distributions to obtain a posterior distribution for each \(\theta_d\) and \(\beta_k\)
The means of these posterior distributions are the outputs of statistical packages, and are what we use to investigate the \(\theta_d\) and \(\beta_k\)
Estimation is performed using either collapsed Gibbs sampling or variational methods
Fortunately for us, these are easily implemented in R
LDA trades off two goals:
(1) For each document, allocate its words to as few topics as possible.
(2) For each topic, assign high probability to as few words as possible.
These goals are at odds.
Putting a document in a single topic makes (2) hard: all of its words must have probability under that topic.
Putting very few words in each topic makes (1) hard: to cover a document’s words, it must assign many topics to it.
Trading off these goals finds groups of tightly co-occurring words
Imagine we have \(D = 1000\) documents, \(J = 10,000\) words, and \(K = 3\) topics.
The key outputs of the topic model are the \(\beta\) and \(\theta\) matrices:
\[\begin{equation} \theta = \underbrace{\begin{pmatrix} \theta_{1,1} & \theta_{1,2} & \theta_{1,3}\\ \theta_{2,1} & \theta_{2,2} & \theta_{2,3}\\ ... & ... & ...\\ \theta_{D,1} & \theta_{D,2} & \theta_{D,3}\\ \end{pmatrix}}_{D\times K} = \underbrace{\begin{pmatrix} 0.7 & 0.2 & 0.1\\ 0.1 & 0.8 & 0.1\\ ... & ... & ...\\ 0.3 & 0.3 & 0.4\\ \end{pmatrix}}_{1000 \times 3} \end{equation}\]
\[\begin{equation} \beta = \underbrace{\begin{pmatrix} \beta_{1,1} & \beta_{1,2} & ... & \beta_{1,J}\\ \beta_{2,1} & \beta_{2,2} & ... & \beta_{2,J}\\ \beta_{3,1} & \beta_{3,2} & ... & \beta_{3,J}\\ \end{pmatrix}}_{K\times J} = \underbrace{\begin{pmatrix} 0.04 & 0.0001 & ... & 0.003\\ 0.0004 & 0.001 & ... & 0.00005\\ 0.002 & 0.0003 & ... & 0.0008\\ \end{pmatrix}}_{3 \times 10,000} \end{equation}\]
Data: UK House of Commons’ debates (PMQs)
Rows: 27,885
Columns: 4
$ name <chr> "Ian Bruce", "Tony Blair", "Denis MacShane", "Tony Blair"…
$ party <chr> "Conservative", "Labour", "Labour", "Labour", "Liberal De…
$ constituency <chr> "South Dorset", "Sedgefield", "Rotherham", "Sedgefield", …
$ body <chr> "In a written answer, the Treasury has just it made clear…
We estimate the model using the topicmodels package:
library(quanteda)
library(topicmodels)
## Create corpus
pmq_corpus <- pmq %>%
corpus(text_field = "body")
pmq_dfm <- pmq_corpus %>%
tokens(remove_punct = TRUE) %>%
dfm() %>%
dfm_remove(stopwords("en")) %>%
dfm_wordstem() %>%
dfm_trim(min_termfreq = 5)
## Convert for usage in 'topicmodels' package
pmq_tm_dfm <- pmq_dfm %>%
convert(to = 'topicmodels')
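The term-score code further down uses a fitted model object `ldaOut`, which is not created in the snippets above. A minimal estimation call might look as follows (K = 20 and the seed are assumptions here; choosing K is discussed later):

```r
# Estimate an LDA model on the converted dfm
# (K = 20 is an assumption; the seed fixes the starting point)
ldaOut <- LDA(pmq_tm_dfm, k = 20, control = list(seed = 123))
```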
We will make use of the following score to visualise the posterior topics:
\[ \text{term-score}_{k,v} = \hat{\beta}_{k,v}\log\left(\frac{\hat{\beta}_{k,v}}{(\prod_{j=1}^{K}\hat{\beta}_{j,v})^{\frac{1}{K}}}\right) \]
This formulation is akin to the TF-IDF term score
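As a quick sanity check on the formula, the term score can be computed directly on a toy \(\hat{\beta}\) matrix (hypothetical values): a word with equal probability under every topic scores zero, while topic-distinctive words score highly.

```r
# Toy beta: K = 2 topics over 3 words; each row sums to one (hypothetical values)
beta <- rbind(c(0.6, 0.3, 0.1),
              c(0.1, 0.3, 0.6))
K <- nrow(beta)

# Per-word geometric mean of beta across topics
geo_mean <- apply(beta, 2, function(b) prod(b)^(1 / K))

# term-score_{k,v} = beta_{k,v} * log(beta_{k,v} / geometric mean over topics)
term_score <- beta * log(sweep(beta, 2, geo_mean, "/"))
```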
# Extract estimated betas (tidy() comes from the tidytext package)
library(tidytext)
topics <- tidy(ldaOut, matrix = "beta")
# Calculate the term scores
top_terms <- topics %>%
  group_by(term) %>%
  mutate(beta_gm = prod(beta)^(1 / n())) %>% # geometric mean of beta across the K topics
  ungroup() %>%
  mutate(term_score = beta * log(beta / beta_gm)) %>%
  group_by(topic) %>%
  slice_max(term_score, n = 10)
# Extract the terms with the largest scores per topic
top_terms$term[top_terms$topic==3]
[1] "economi" "econom" "interest" "plan" "rate" "countri"
[7] "deficit" "s" "growth" "debt"
[1] "forc" "iraq" "defenc" "british" "afghanistan"
[6] "troop" "secur" "arm" "war" "weapon"
report.inquiri.review.publish.committe.commiss.inform (29%)
In relation to all those issues, the Intelligence and Security Committee is at full liberty to go through all the Joint Intelligence Committee assessments and produce a report on them. Because of the importance of the issue, it is only right that a report be published so that people can make a judgment on it. However, the claims that have been made are simply false. In particular, the claim that the readiness of Saddam to use weapons within 45 minutes of an order to use them was a point inserted in the dossier at the behest of No. 10 is completely and totally untrue. Furthermore, the allegation that the 45-minute claim provoked disquiet among the intelligence community, which disagreed with its inclusion in the dossier I have discussed it, as I said, with the chairman of the Joint Intelligence Committee is also completely and totally untrue. Instead of hearing from one or many anonymous sources, I suggest that if people have any evidence, they actually produce it.
Other topics in document:
said.say.wrong.chancellor.listen.think.just (7%)
howev.cours.problem.reason.way.must.respect (6%)
act.protect.case.law.countri.use.peopl (5%)
busi.bank.scheme.small.help.financi.govern (30%)
I am grateful to the right hon. Gentleman for giving me an opportunity to say, in addition, what we are doing to help small businesses. The key issues for small and medium-sized enterprises are cash flow, and, to some extent, access to finance, as I have just said. They need to be helped through this critical period. Late payment problems, which have intensified, with all firms on average lengthening the time it takes to pay their suppliers, including SMEs, go beyond agreed terms. The Government can ease the situation, and we will help cash flow through prompt payment. The Government have already agreed to move their procurement rules from payments within 30 days to a commitment to pay as soon as possible. In the current climate, we need to go further, with a harder target. We will therefore aim to make SME payments within 10 days. The Government will pick up the cost of that, but it is a small price for greatly increasing cash flow associated with 8 billion of contracts for SMEs. As I announced last weekend, we propose that the European Investment Bank increase its loans worth up to 4 billion to United Kingdom banks for use by small and medium-sized enterprises. We are now pressing for further additional funding to be advanced, and for UK banks to be able to ensure that they take up the full funding available for SMEs. The Government will, of course, review the impact of any regulatory measures already agreed, but let me make it clear that we are doing, and will do, everything we can to assist SMEs throughout this period. Our restructuring of the banks is designed to ensure that, although it is difficult to achieve, credit lines can remain open to SMEs on a commercial basis. We will not accept that banks should cull their credit lines to eliminate their own exposure to risk, so we will do everything we can to help the 4 million small businesses of this country.
Other topics in document:
can.assur.give.ensur.support.possibl.work (9%)
countri.part.chang.world.import.need.play (6%)
money.spend.million.invest.billion.extra.put (5%)
local.constitu.council.area.author.communiti.region (26%)
In March we introduced a new local green space designation to protect green spaces not just for great crested newts and landscape painters but for urban and suburban communities such as Leckhampton, Warden Hill and Whaddon in my constituency. Can the Prime Minister reassure local councils that they can and should use this new designation and that it has not been undermined by any recent pronouncements?
Other topics in document:
act.protect.case.law.countri.use.peopl (5%)
can.assur.give.ensur.support.possibl.work (4%)
minist.prime.deputi.confirm.failur.realiti.watch (4%)
Advantages
Disadvantages
Policy problem: Performance in standardised tests is strongly correlated with income, creating the potential for bias against lower-income students.
Research question: Are other components of admission files – such as written essays – less correlated with income than SATs?
Research Design:
Conclusions
Topical content strongly predicts household income
Topical content strongly predicts SAT scores
Even conditional on income, topics predict SAT scores
“Our results strongly suggest that the imprint of social class will be found in even the fuzziest of application materials.”
LDA can be embedded in more complicated models, embodying further intuitions about the structure of the texts.
The data generating distribution can be changed. We can apply mixed-membership assumptions to many kinds of data.
The posterior can be used in creative ways.
Correlated Topic Model (CTM)
Dynamic Topic Model (DTM)
Structural Topic Model (STM)
Typically, when estimating topic models we are interested in how some covariate is associated with the prevalence of topic usage (gender, date, political party, etc.)
The Structural Topic Model (STM) allows for the inclusion of arbitrary covariates of interest into the generative model
Topic prevalence is allowed to vary according to the covariates \(X\)
Topical content can also vary according to the covariates \(Y\)
Topic prevalence model:
Topical content model:
Specify a linear model with:
\[ \theta_{dk} = \alpha + \gamma_{1k}*\text{labour}_{d(i)} \]
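In R, this prevalence specification maps onto the stm package’s formula interface. A sketch (the object `pmq_stm` is hypothetical, assumed to hold the documents, vocab, and metadata produced by `textProcessor()`/`prepDocuments()`; K = 30 matches the output below):

```r
library(stm)

# Sketch: party enters as a topic-prevalence covariate
# ('pmq_stm' is a hypothetical prepared-data object)
stm_fit <- stm(documents  = pmq_stm$documents,
               vocab      = pmq_stm$vocab,
               K          = 30,
               prevalence = ~ party,
               data       = pmq_stm$meta)
```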
Topic 1 Top Words:
Highest Prob: minist, prime, govern, s, tell, confirm, ask
FREX: prime, minist, confirm, failur, paymast, lack, embarrass
Lift: protectionist, roadshow, harrison, booki, arrog, googl, pembrokeshir
Score: prime, minist, s, confirm, protectionist, govern, tell
Topic 2 Top Words:
Highest Prob: chang, review, made, target, fund, meet, depart
FREX: climat, flood, review, chang, environ, emiss, carbon
Lift: 2050, consequenti, parrett, dredg, climat, greenhous, barnett
Score: chang, flood, climat, review, target, environ, emiss
Topic 3 Top Words:
Highest Prob: servic, health, nhs, care, hospit, nation, wait
FREX: cancer, patient, nhs, health, hospit, gp, doctor
Lift: horton, scotsman, wellb, clinician, herceptin, polyclin, healthcar
Score: health, nhs, servic, hospit, cancer, patient, nurs
Topic 4 Top Words:
Highest Prob: decis, vote, made, parti, elect, propos, debat
FREX: vote, liber, debat, scottish, decis, recommend, scotland
Lift: calman, gould, imc, wakeham, in-built, ipsa, jenkin
Score: vote, democrat, decis, parti, debat, liber, elect
Topic 5 Top Words:
Highest Prob: secretari, said, state, last, week, inquiri, report
FREX: deputi, warn, resign, inquiri, alleg, statement, servant
Lift: donnel, gus, revolutionari, sixsmith, column, bend, coulson
Score: secretari, deputi, inquiri, committe, said, state, alleg
Topic 6 Top Words:
Highest Prob: northern, ireland, meet, agreement, process, talk, peopl
FREX: ireland, northern, agreement, ira, down, sinn, decommiss
Lift: clamour, haass, tibetan, dalai, lama, tibet, presbyterian
Score: ireland, northern, agreement, peac, meet, process, down
Topic 7 Top Words:
Highest Prob: hous, home, build, need, common, plan, social
FREX: rent, hous, afford, properti, buy, lesson, site
Lift: fairi, rung, rent, greenfield, owner-occupi, bed-and-breakfast, tenant
Score: hous, home, build, rent, afford, common, fairi
Topic 8 Top Words:
Highest Prob: year, offic, polic, last, month, ago, two
FREX: four, three, ago, promis, month, five, six
Lift: templ, eye-catch, dixon, folder, paperwork, cutback, mug
Score: year, polic, crime, offic, figur, promis, month
Topic 9 Top Words:
Highest Prob: countri, world, peopl, forc, troop, afghanistan, aid
FREX: africa, taliban, zimbabw, aid, afghan, troop, g8
Lift: mbeki, madrassah, mandela, shi'a, thabo, non-agricultur, zimbabwean
Score: afghanistan, troop, iraq, iraqi, aid, afghan, africa
Topic 10 Top Words:
Highest Prob: bank, busi, energi, price, action, financi, take
FREX: price, bank, lend, energi, market, regul, financi
Lift: contagion, lender, recapitalis, depositor, okay, payday, ofgem
Score: bank, energi, price, busi, market, regul, okay
Topic 11 Top Words:
Highest Prob: school, educ, children, univers, parent, student, teacher
FREX: pupil, student, school, teacher, educ, fee, univers
Lift: 11-plus, grant-maintain, meal, numeraci, underachiev, learner, per-pupil
Score: school, educ, children, univers, teacher, student, pupil
Topic 12 Top Words:
Highest Prob: hon, right, friend, member, agre, may, mr
FREX: member, friend, right, hon, york, witney, richmond
Lift: dorri, dewar, cowdenbeath, kirkcaldi, nadin, hain, neath
Score: friend, hon, right, member, mr, dorri, agre
Topic 13 Top Words:
Highest Prob: per, cent, 20, 10, 50, increas, 100
FREX: cent, per, 50, 15, 20, 60, 40
Lift: unrealist, slaughtermen, ppp, cent, outbreak, per, maff
Score: per, cent, 20, 50, unrealist, 15, billion
Topic 14 Top Words:
Highest Prob: mr, money, word, taxpay, speaker, public, much
FREX: speaker, mail, strike, taxpay, gold, valu, blair
Lift: davo, measl, jab, trussel, brightsid, mail, spiv
Score: mr, speaker, taxpay, davo, word, strike, mail
Topic 15 Top Words:
Highest Prob: number, increas, result, peopl, train, year, addit
FREX: number, train, overal, recruit, 1997, equip, increas
Lift: stubborn, ta, midwiferi, largest-ev, 180,000, dentist, improvis
Score: number, increas, train, invest, 1997, equip, defenc
Topic 16 Top Words:
Highest Prob: pension, benefit, peopl, help, work, million, poverti
FREX: disabl, pension, post, poverti, benefit, payment, retir
Lift: adair, eyesight, off-peak, sub-post, sub-postmast, over-75, concessionari
Score: pension, benefit, disabl, post, poverti, child, welfar
Topic 17 Top Words:
Highest Prob: law, act, legisl, crime, prison, peopl, measur
FREX: prison, asylum, crimin, releas, deport, offenc, law
Lift: conduc, investigatori, porn, indetermin, parol, pre-releas, deport
Score: prison, crime, crimin, sentenc, law, asylum, drug
Topic 18 Top Words:
Highest Prob: conserv, govern, parti, spend, polici, money, gentleman
FREX: conserv, spend, oppos, tori, parti, previous, polici
Lift: snooper, attle, bawl, saatchi, family-friend, tori, chef
Score: conserv, spend, parti, oppos, money, cut, billion
Topic 19 Top Words:
Highest Prob: european, union, britain, europ, british, countri, rule
FREX: european, europ, treati, currenc, eu, union, constitut
Lift: overtaken, super-st, tidying-up, super-pow, lafontain, isc, lisbon
Score: european, union, europ, referendum, treati, constitut, britain
Topic 20 Top Words:
Highest Prob: unit, kingdom, iraq, state, nation, weapon, secur
FREX: palestinian, weapon, resolut, israel, destruct, kingdom, mass
Lift: 1441, palestinian, two-stat, chess, hama, jenin, quartet
Score: unit, un, iraq, weapon, saddam, kingdom, palestinian
Topic 21 Top Words:
Highest Prob: constitu, concern, awar, can, suffer, case, assur
FREX: mother, miner, mrs, compens, suffer, aircraft, mine
Lift: manston, tebbutt, tyrel, asbestosi, byron, ex-min, norburi
Score: constitu, compens, suffer, death, awar, tebbutt, safeti
Topic 22 Top Words:
Highest Prob: join, famili, tribut, pay, express, live, serv
FREX: condol, sympathi, regiment, tribut, sacrific, veteran, servicemen
Lift: aaron, chant, guardsman, gurung, khabra, mercian, spitfir
Score: tribut, condol, join, afghanistan, famili, express, sympathi
Topic 23 Top Words:
Highest Prob: make, issu, hon, import, gentleman, look, can
FREX: issu, proper, look, obvious, certain, understand, point
Lift: canvass, launder, quasi-judici, biodivers, offhand, obvious, certain
Score: issu, gentleman, import, hon, point, make, look
Topic 24 Top Words:
Highest Prob: invest, london, region, transport, develop, constitu, project
FREX: project, rail, scienc, infrastructur, transport, research, north
Lift: duall, electrifi, skelmersdal, wigton, dawlish, electrif, stoneheng
Score: invest, transport, region, rail, scienc, project, infrastructur
Topic 25 Top Words:
Highest Prob: local, communiti, council, author, support, polic, peopl
FREX: behaviour, antisoci, local, counti, club, footbal, author
Lift: blyth, changemak, asbo, graffiti, csos, darwen, under-16
Score: local, communiti, antisoci, behaviour, council, author, polic
Topic 26 Top Words:
Highest Prob: job, work, peopl, unemploy, economi, busi, help
FREX: unemploy, employ, growth, sector, long-term, apprenticeship, creat
Lift: skipton, entrepreneuri, sector-l, sandwich, back-to-work, entrepreneur, unemploy
Score: unemploy, job, economi, sector, employ, growth, busi
Topic 27 Top Words:
Highest Prob: say, let, said, want, labour, go, question
FREX: answer, question, let, got, shadow, listen, wrong
Lift: wriggl, airbrush, mccluskey, re-hir, pre-script, beveridg, bandwagon
Score: labour, answer, question, let, gentleman, said, say
Topic 28 Top Words:
Highest Prob: tax, pay, cut, budget, famili, rate, peopl
FREX: tax, vat, low, budget, top, revenu, incom
Lift: flatter, 45p, non-domicil, 107,000, 50p, clifton, millionair
Score: tax, cut, pay, incom, rate, famili, budget
Topic 29 Top Words:
Highest Prob: industri, compani, worker, british, manufactur, job, trade
FREX: manufactur, industri, product, plant, steel, worker, car
Lift: alstom, gum, jcb, peugeot, klesch, chew, dairi
Score: industri, manufactur, compani, worker, export, farmer, farm
Topic 30 Top Words:
Highest Prob: govern, can, mani, support, peopl, countri, take
FREX: mani, support, govern, unlik, give, come, take
Lift: philip, unlik, unbeliev, mani, though, leav, despit
Score: philip, govern, mani, unlik, support, can, peopl
Highest Prob: the raw \(\beta\) coefficients
Score: the term-score measure we defined above
FREX: a measure which combines word-topic frequency with word-topic exclusivity
Lift: a normalised version of the word-probabilities
Topic 3:
I suspect that many Members from all parties in this House will agree that mental health services have for too long been treated as a poor cousin a Cinderella service in the NHS and have been systematically underfunded for a long time. That is why I am delighted to say that the coalition Government have announced that we will be introducing new access and waiting time standards for mental health conditions such as have been in existence for physical health conditions for a long time. Over time, as reflected in the new NHS mandate, we must ensure that mental health is treated with equality of resources and esteem compared with any other part of the NHS.
I am sure that the Prime Minister will join me in congratulating Cheltenham and Tewkesbury primary care trust on never having had a financial deficit and on living within its means. Can he therefore explain to the professionals, patients and people of Cheltenham why we are being rewarded with the closure of our 10-year-old purpose-built maternity ward, the closure of our rehabilitation hospital, cuts in health promotion, cuts in community nursing, cuts in health visiting, cuts in access to acute care and the non-implementation of new NICE-prescribed drugs such as Herceptin?
Do MPs from different parties speak about healthcare at different rates?
On which topics do Conservative and Labour MPs differ the most?
Do liberal and conservative newspapers report on the economy in different ways?
Lucy Barnes and Tim Hicks (UCL) study the determinants of voters’ attitudes toward government deficits. They argue that individual attitudes are largely a function of media framing. They examine whether and how the Guardian (a left-leaning newspaper) and the Telegraph (a right-leaning newspaper) report on the economy.
Data and approach:
\(\approx 10,000\) newspaper articles
STM model
\(K = 6\)
Newspaper covariates for both prevalence and content
LDA, and topic models more generally, require the researcher to make several implementation decisions
In particular, we must select a value for \(K\), the number of topics
How can we select between different values of K? How can we tell how well a given topic model is performing?
Predictive metric: Held-out likelihood
Ask which words the model believes will be in a given document and compare this to the document’s actual word composition (i.e. calculate the held-out likelihood)
E.g. split texts in half, train a topic model on one half, and calculate the held-out likelihood for the other half
Problem: Prediction is not always important in exploratory or descriptive tasks. We may want models that capture other aspects of the data.
Interpretational metrics
Semantic coherence
Exclusivity
Problem: The correlation between quantitative diagnostics such as these and human judgements of topic coherence is not always positive!
We can apply many of these metrics across a range of topic models using the searchK function in the stm package.
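A sketch of that workflow (`pmq_stm` is a hypothetical object, assumed to hold the documents and vocab from `prepDocuments()`):

```r
library(stm)

# Compare candidate values of K on held-out likelihood, semantic
# coherence, exclusivity, residuals, etc.
k_search <- searchK(documents = pmq_stm$documents,
                    vocab     = pmq_stm$vocab,
                    K         = c(10, 20, 30, 40))
plot(k_search)
```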
Word intrusion: Test if topics have semantic coherence by asking humans to identify a spurious word inserted into a topic.
Topic | \(w_1\) | \(w_2\) | \(w_3\) | \(w_4\) | \(w_5\) | \(w_6\) |
---|---|---|---|---|---|---|
1 | bank | financ | terror | england | fiscal | market |
2 | europe | union | eu | referendum | vote | school |
3 | act | deliv | nhs | prison | mr | right |
Assumption: When humans find it easy to locate the “intruding” word, the topics are more coherent.
Topic intrusion: Test if the association between topics and documents makes sense by asking humans to identify a topic that was not associated with a document.
Reforms to the banking system are an essential part of dealing with the crisis, and delivering lasting and sustainable growth to the economy. Without these changes, we will be weaker, we will be less well respected abroad, and we will be poorer.
Topic | \(w_1\) | \(w_2\) | \(w_3\) | \(w_4\) | \(w_5\) | \(w_6\) |
---|---|---|---|---|---|---|
1 | bank | financ | regul | england | fiscal | market |
2 | plan | econom | growth | longterm | deliv | sector |
3 | school | educ | children | teacher | pupil | class |
Assumption: When humans find it easy to locate the “intruding” topic, the mappings are more sensible.
Conclusion:
“Topic models which perform better on held-out likelihood may infer less semantically meaningful topics.” (Chang et al. 2009.)
Semantic validity
Predictive validity
Construct validity
Implication: All these approaches require careful human reading of texts and topics, and comparison with sensible metadata.
Topic models offer an approach to automatically inferring the substantive themes that exist in a corpus of texts
A topic is described as a probability distribution over words in the vocabulary
Documents are described as a mixture of corpus-wide topics
Topic models require very little up-front effort, but require extensive interpretation and validation
PUBL0099