
Introduction to Information Retrieval Reading Notes (2)


The term vocabulary and postings lists

Inverted index construction steps (a sketch follows the list):

1. Collect the documents to be indexed.
2. Tokenize the text.
3. Do linguistic preprocessing of tokens.
4. Index the documents that each term occurs in.
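
A minimal Python sketch of these four steps, assuming a tiny in-memory collection of plain-text strings (the toy documents and the regex tokenizer below are illustrative assumptions, not code from the book):

import re
from collections import defaultdict

# 1. Collect the documents to be indexed (toy collection).
docs = {1: "new home sales top forecasts",
        2: "home sales rise in july",
        3: "increase in home sales in july"}

def tokenize(text):
    # 2. Tokenize: split on non-alphanumeric characters.
    return re.findall(r"\w+", text)

def preprocess(token):
    # 3. Linguistic preprocessing: here just case-folding.
    return token.lower()

# 4. Index: map each term to a sorted list of docIDs (its postings list).
index = defaultdict(list)
for doc_id in sorted(docs):
    for term in sorted({preprocess(t) for t in tokenize(docs[doc_id])}):
        index[term].append(doc_id)

print(index["sales"])   # [1, 2, 3]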

 

2.1 Document delineation and character sequence decoding

Encoding problems: how to automatically detect the character encoding.

Document format problems: DOC, PDF, XML, HTML, and so on.

Sequence problems: e.g. Arabic, where text takes on some two-dimensional and mixed-order characteristics.

 

Choosing a document unit: a precision/recall tradeoff; the problems of large document units can be alleviated by explicit or implicit proximity search.

 

2.2 Determining the vocabulary of terms

Token: tokenization is the task of chopping a character sequence up into pieces, called tokens, perhaps at the same time throwing away certain characters, such as punctuation.

Difference between token and type:

A token is an instance of a character sequence as it occurs in a particular document.

A type is the class of all tokens containing the same character sequence.

This is like the difference between an instance and a class in OOP.
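
A small illustration of the token/type distinction, assuming simple whitespace tokenization (illustrative only):

sentence = "to sleep perchance to dream"
tokens = sentence.split()        # 5 tokens: each occurrence counts
types = set(tokens)              # 4 types: {"to", "sleep", "perchance", "dream"}
print(len(tokens), len(types))   # 5 4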

Tokenization is language-specific. Language identification based on classifiers that use short character subsequences as features is highly effective; most languages have distinctive signature patterns.
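
A crude sketch of the character-subsequence idea: compare the input's character trigrams against per-language trigram profiles and pick the best overlap (a stand-in for a real trained classifier; the toy profiles below are illustrative assumptions):

from collections import Counter

def char_ngrams(text, n=3):
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def identify_language(text, profiles):
    # Choose the language whose profile shares the most trigrams with the input.
    grams = char_ngrams(text)
    return max(profiles, key=lambda lang: sum((grams & profiles[lang]).values()))

profiles = {"en": char_ngrams("the quick brown fox jumps over the lazy dog"),
            "de": char_ngrams("der schnelle braune fuchs springt über den faulen hund")}
print(identify_language("the cat sat on the mat", profiles))   # en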

Chinese word segmentation: maximum forward/backward matching (a sketch follows below).

Recognition of proper nouns and special tokens: IP addresses, URLs, email addresses, phone numbers.
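
A minimal sketch of maximum forward matching for Chinese word segmentation (the toy vocabulary is an illustrative assumption):

def max_forward_match(text, vocab, max_len=4):
    # Greedily take the longest dictionary word starting at each position;
    # fall back to a single character when nothing matches.
    result, i = [], 0
    while i < len(text):
        for j in range(min(max_len, len(text) - i), 0, -1):
            if text[i:i + j] in vocab or j == 1:
                result.append(text[i:i + j])
                i += j
                break
    return result

vocab = {"信息", "检索", "信息检索", "读书", "笔记"}
print(max_forward_match("信息检索读书笔记", vocab))   # ['信息检索', '读书', '笔记']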

2.2.2 Dropping common terms: stop words

Stop words: extremely common words that have little value in helping select documents matching a user need.

How to collect them:

The general strategy for determining a stop list is to sort the terms by collection frequency and then take the most frequent terms, often hand-filtered for their semantic content relative to the domain of the documents being indexed, as a stop list.
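
A minimal sketch of that strategy, assuming the documents are already tokenized (the cutoff of 25 terms is an arbitrary illustrative choice):

from collections import Counter

def build_stop_list(tokenized_docs, n_most_common=25):
    # Collection frequency = total occurrences of a term across all documents.
    cf = Counter()
    for tokens in tokenized_docs:
        cf.update(tokens)
    # Take the most frequent terms; in practice they are then hand-filtered.
    return [term for term, _ in cf.most_common(n_most_common)]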

2.2.3 Normalization (equivalence classing of terms)

Token normalization is the process of canonicalizing tokens so that matches occur despite superficial differences in the character sequences of the tokens.

Different spellings: anti-discriminatory and antidiscriminatory

Synonyms: car and automobile

Accents and diacritics

Capitalization/case-folding
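
A minimal sketch of two of these normalizations, case-folding plus accent/diacritic removal via Unicode decomposition (an illustrative approach, not code from the book):

import unicodedata

def normalize(token):
    token = token.lower()                              # case-folding
    decomposed = unicodedata.normalize("NFD", token)   # split base chars + accents
    return "".join(c for c in decomposed if not unicodedata.combining(c))

print(normalize("Cliché"))   # cliche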

2.2.4 Stemming and lemmatization

The goal of both stemming and lemmatization is to reduce inflectional forms and sometimes derivationally related forms of a word to a common base form.

e.g.:

am, are, is ⇒ be
car, cars, car’s, cars’ ⇒ car

Some common algorithms for stemming English:

Porter stemmer, Lovins stemmer, Paice stemmer
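
For instance, the Porter stemmer ships with NLTK; a usage sketch (assuming the nltk package is installed; the expected outputs follow the classic Porter rules):

from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
for word in ["caresses", "ponies", "cats"]:
    print(word, "->", stemmer.stem(word))
# caresses -> caress, ponies -> poni, cats -> cat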

 

2.3 Faster postings list intersection via skip pointers

Postings lists intersection with skip pointers:

INTERSECTWITHSKIPS(p1, p2)
  answer ← ()
  while p1 != NIL and p2 != NIL
    do if docID(p1) = docID(p2)
         then ADD(answer, docID(p1))
              p1 ← next(p1)
              p2 ← next(p2)
         else if docID(p1) < docID(p2)
                then if hasSkip(p1) and docID(skip(p1)) ≤ docID(p2)
                       then while hasSkip(p1) and docID(skip(p1)) ≤ docID(p2)
                              do p1 ← skip(p1)
                       else p1 ← next(p1)
                else if hasSkip(p2) and docID(skip(p2)) ≤ docID(p1)
                       then while hasSkip(p2) and docID(skip(p2)) ≤ docID(p1)
                              do p2 ← skip(p2)
                       else p2 ← next(p2)
  return answer
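
A runnable Python sketch of the same idea, representing each postings list as a sorted Python list of docIDs and treating every position that is a multiple of sqrt(n) as carrying a skip pointer sqrt(n) entries ahead (this layout is an illustrative assumption):

import math

def intersect_with_skips(p1, p2):
    skip1 = max(1, int(math.sqrt(len(p1))))
    skip2 = max(1, int(math.sqrt(len(p2))))
    answer, i, j = [], 0, 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i])
            i += 1
            j += 1
        elif p1[i] < p2[j]:
            # Follow skip pointers on p1 while they do not overshoot p2[j].
            if i % skip1 == 0 and i + skip1 < len(p1) and p1[i + skip1] <= p2[j]:
                while i % skip1 == 0 and i + skip1 < len(p1) and p1[i + skip1] <= p2[j]:
                    i += skip1
            else:
                i += 1
        else:
            # Symmetric case: follow skip pointers on p2.
            if j % skip2 == 0 and j + skip2 < len(p2) and p2[j + skip2] <= p1[i]:
                while j % skip2 == 0 and j + skip2 < len(p2) and p2[j + skip2] <= p1[i]:
                    j += skip2
            else:
                j += 1
    return answer

print(intersect_with_skips([2, 4, 8, 16, 19, 23, 28, 43], [1, 2, 3, 5, 8, 41, 51, 60, 71]))
# [2, 8]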

2.4 Positional postings and phrase queries

Biword indexes: one approach to handling phrases is to consider every pair of consecutive terms in a document as a phrase. (Not a standard solution.)

Biword Extension:

The concept of a biword index can be extended to longer sequences of words; if the index includes variable-length word sequences, it is generally referred to as a phrase index.
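
A minimal sketch of biword extraction from a token stream (illustrative only):

def biwords(tokens):
    # Every pair of consecutive terms becomes one dictionary entry.
    return [a + " " + b for a, b in zip(tokens, tokens[1:])]

print(biwords(["friends", "romans", "countrymen"]))
# ['friends romans', 'romans countrymen']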

 

Positional indexes (the most commonly employed approach):

store postings of the form docID: <position1, position2, . . .>

 

An algorithm for proximity intersection of postings lists p1 and p2:

 

POSITIONALINTERSECT(p1, p2, k)
  answer ← ()
  while p1 != NIL and p2 != NIL
    do if docID(p1) = docID(p2)
         then l ← ()
              pp1 ← positions(p1)
              pp2 ← positions(p2)
              while pp1 != NIL
                do while pp2 != NIL
                     do if |pos(pp1) − pos(pp2)| ≤ k
                          then ADD(l, pos(pp2))
                          else if pos(pp2) > pos(pp1)
                                 then break
                        pp2 ← next(pp2)
                   while l != () and |l[0] − pos(pp1)| > k
                     do DELETE(l[0])
                   for each ps ∈ l
                     do ADD(answer, <docID(p1), pos(pp1), ps>)
                   pp1 ← next(pp1)
              p1 ← next(p1)
              p2 ← next(p2)
         else if docID(p1) < docID(p2)
                then p1 ← next(p1)
                else p2 ← next(p2)
  return answer
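
A runnable Python sketch of proximity intersection, assuming each postings entry is a (docID, sorted position list) pair (this representation is an illustrative assumption):

def positional_intersect(p1, p2, k):
    # Returns triples (docID, pos of term 1, pos of term 2) with |pos1 - pos2| <= k.
    answer, i, j = [], 0, 0
    while i < len(p1) and j < len(p2):
        doc1, doc2 = p1[i][0], p2[j][0]
        if doc1 == doc2:
            for pos1 in p1[i][1]:
                for pos2 in p2[j][1]:
                    if abs(pos1 - pos2) <= k:
                        answer.append((doc1, pos1, pos2))
                    elif pos2 > pos1:
                        break   # positions are sorted; nothing closer follows
            i += 1
            j += 1
        elif doc1 < doc2:
            i += 1
        else:
            j += 1
    return answer

# "to" at positions [1, 5] and "be" at [2, 6] in document 7, within k = 1
print(positional_intersect([(7, [1, 5])], [(7, [2, 6])], k=1))
# [(7, 1, 2), (7, 5, 6)]

The inner double loop here is quadratic in the number of positions in the worst case; the book's version keeps the sliding window l so that the work stays linear in the position lists.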
 

Combination schemes:

A combination of biword indexes and positional indexes.

 
