Finding Similar Items:
Locality Sensitive Hashing
Viet-Trung Tran
Credits
• Mining of Massive Datasets
– Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford
University
– http://www.mmds.org
Scene Completion Problem
[Hays and Efros, SIGGRAPH 2007]
• 10 nearest neighbors from a collection of 20,000 images
• 10 nearest neighbors from a collection of 2 million images
A Common Metaphor
• Many problems can be expressed as
finding “similar” sets:
– Find near-neighbors in high-dimensional space
• Examples:
– Pages with similar words
• For duplicate detection, classification by topic
– Customers who purchased similar products
• Products with similar customer sets
– Images with similar features
• Users who visited similar websites
Problem
• Given high-dimensional data points: x1, x2, ...
– E.g., an image is a long vector of pixel colors
• And some distance function d(x1, x2)
• Goal: find all pairs of data points (xi, xj) that are
within some distance threshold d(xi, xj) <= s
• Naive solution: O(n^2)
• Can it be done in O(n)?
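For concreteness, a minimal sketch of the naive O(n^2) baseline the slide refers to; the names (points, dist, s) are illustrative, not from the slides. All code examples below are Python.

```python
# Naive all-pairs search: compare every pair against the threshold s.
def naive_similar_pairs(points, dist, s):
    """Return all index pairs (i, j), i < j, with dist(points[i], points[j]) <= s."""
    pairs = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):  # n*(n-1)/2 comparisons in total
            if dist(points[i], points[j]) <= s:
                pairs.append((i, j))
    return pairs
```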
Relation to Previous Lecture
• Last time: Finding frequent pairs
• Naïve solution: a single pass over an N x N table holding the count of
each pair {i,j} in the data, but it requires space quadratic in the
number of items
• A-Priori:
– First pass: Find frequent singletons
– For a pair to be a frequent pair candidate, its singletons have to be
frequent! This shrinks the table to K x K
– Second pass: Count only candidate pairs!
• N … number of distinct items
• K … number of items with support ≥ s
Relation to Previous Lecture
• Last time: Finding frequent pairs
• Further improvement: PCY
– Pass 1:
• Count the exact frequency of each item
• Take pairs of items {i,j}, hash them into B buckets, and count the
number of pairs that hashed to each bucket
• Example: Basket 1: {1,2,3} produces the pairs {1,2} {1,3} {2,3},
which are hashed into buckets 1…B
Relation to Previous Lecture
• Last time: Finding frequent pairs
• Further improvement: PCY
– Pass 1:
• Count the exact frequency of each item
• Take pairs of items {i,j}, hash them into B buckets, and count the
number of pairs that hashed to each bucket
– Pass 2:
• For a pair {i,j} to be a candidate for
a frequent pair, its singletons {i}, {j}
have to be frequent and the pair
has to hash to a frequent bucket!
• Example: Basket 1: {1,2,3} gives pairs {1,2} {1,3} {2,3};
Basket 2: {1,2,4} gives pairs {1,2} {1,4} {2,4};
all are hashed into buckets 1…B
Approaches
• A-Priori
– Main idea: Candidates
– Instead of keeping a count of each pair, only keep a count
of candidate pairs!
• Find pairs of similar docs
– Main idea: Candidates
– Pass 1: Take documents and hash them to buckets such that
documents that are similar hash to the same bucket
– Pass 2: Only compare documents that are candidates
(i.e., they hashed to the same bucket)
– Benefits: Instead of O(N^2) comparisons, we need O(N)
comparisons to find similar documents
Finding Similar Items
Distance Measures
• Goal: Find near-neighbors in high-dim. space
– We formally define “near neighbors” as
points that are a “small distance” apart
• For each application, we first need to define what
“distance” means
• Today: Jaccard distance/similarity
– The Jaccard similarity of two sets is the size of their
intersection divided by the size of their union:
sim(C1, C2) = |C1 ∩ C2| / |C1 ∪ C2|
– Jaccard distance: d(C1, C2) = 1 − |C1 ∩ C2| / |C1 ∪ C2|
• Example: 3 elements in the intersection, 8 in the union:
Jaccard similarity = 3/8
Jaccard distance = 5/8
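A minimal sketch of these two definitions; the concrete sets below are made up to reproduce the 3-in-intersection, 8-in-union example:

```python
# Jaccard similarity and distance of two sets, as defined above.
def jaccard_similarity(c1: set, c2: set) -> float:
    return len(c1 & c2) / len(c1 | c2)

def jaccard_distance(c1: set, c2: set) -> float:
    return 1.0 - jaccard_similarity(c1, c2)

c1 = {1, 2, 3, 4, 5, 6}
c2 = {4, 5, 6, 7, 8}          # intersection {4,5,6}, union {1,...,8}
assert jaccard_similarity(c1, c2) == 3 / 8
assert jaccard_distance(c1, c2) == 5 / 8
```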
Task: Finding Similar Documents
• Goal: Given a large number (in the millions or billions)
of documents, find “near duplicate” pairs
• Applications:
– Mirror websites, or approximate mirrors
• Don’t want to show both in search results
– Similar news articles at many news sites
• Cluster articles by “same story”
• Problems:
– Many small pieces of one document can appear
out of order in another
– Too many documents to compare all pairs
– Documents are so large or so many that they cannot
fit in main memory
3 Essential Steps for Similar Docs
1. Shingling: Convert documents to sets
2. Min-Hashing: Convert large sets to short
signatures, while preserving similarity
3. Locality-Sensitive Hashing: Focus on
pairs of signatures likely to be from
similar documents
– Candidate pairs!
The Big Picture
Document
→ (Shingling) → the set of strings of length k that appear in the document
→ (Min-Hashing) → signatures: short integer vectors that represent the sets, and reflect their similarity
→ (Locality-Sensitive Hashing) → candidate pairs: those pairs of signatures that we need to test for similarity
Shingling
Step 1: Shingling: Convert documents to sets
(Document → the set of strings of length k that appear in the document)
Documents as High-Dim Data
• Step 1: Shingling: Convert documents to sets
• Simple approaches:
– Document = set of words appearing in document
– Document = set of “important” words
– Don’t work well for this application. Why?
• Need to account for ordering of words!
• A different way: Shingles!
Define: Shingles
• A k-shingle (or k-gram) for a document is a
sequence of k tokens that appears in the doc
– Tokens can be characters, words or something else,
depending on the application
– Assume tokens = characters for examples
• Example: k=2; document D1 = abcab
Set of 2-shingles: S(D1) = {ab, bc, ca}
– Option: Shingles as a bag (multiset), count ab twice:
S’(D1) = {ab, bc, ca, ab}
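A minimal sketch of character-level k-shingling as defined above:

```python
# Set of all length-k substrings (character k-shingles) of a document.
def shingles(doc: str, k: int) -> set:
    return {doc[i:i + k] for i in range(len(doc) - k + 1)}

assert shingles("abcab", 2) == {"ab", "bc", "ca"}   # the D1 example
```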
Compressing Shingles
• To compress long shingles, we can hash them to
32 bits
• Represent a document by the set of hash values
of its k-shingles
– Idea: Two documents could (rarely) appear to have
shingles in common, when in fact only the hash-values
were shared
• Example: k=2; document D1= abcab
Set of 2-shingles: S(D1) = {ab, bc, ca}
Hash the shingles: h(D1) = {1, 5, 7}
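A sketch of the compression step. The slide's hash values {1, 5, 7} are illustrative; here we keep 32 bits of MD5, an arbitrary but stable choice:

```python
import hashlib

# Represent a document by the 32-bit hash values of its k-shingles.
def hashed_shingles(doc: str, k: int) -> set:
    out = set()
    for i in range(len(doc) - k + 1):
        digest = hashlib.md5(doc[i:i + k].encode("utf-8")).digest()
        out.add(int.from_bytes(digest[:4], "big"))  # truncate to 32 bits
    return out
```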
Similarity Metric for Shingles
• Document D1 is a set of its k-shingles C1=S(D1)
• Equivalently, each document is a
0/1 vector in the space of k-shingles
– Each unique shingle is a dimension
– Vectors are very sparse
• A natural similarity measure is the
Jaccard similarity:
sim(D1, D2) = |C1 ∩ C2| / |C1 ∪ C2|
Working Assumption
• Documents that have lots of shingles in
common have similar text, even if the text
appears in different order
• Caveat: You must pick k large enough, or most
documents will have most shingles
– k = 5 is OK for short documents
– k = 10 is better for long documents
Motivation for Minhash/LSH
• Suppose we need to find near-duplicate
documents among N = 1 million documents
• Naïvely, we would have to compute pairwise
Jaccard similarities for every pair of docs
– N(N − 1)/2 ≈ 5·10^11 comparisons
– At 10^5 secs/day and 10^6 comparisons/sec,
it would take 5 days
• For N = 10 million, it takes more than a year…
MinHashing
Step 2: Minhashing: Convert large sets
to short signatures, while preserving
similarity
(Document → the set of strings of length k that appear in the document
→ signatures: short integer vectors that represent the sets, and reflect
their similarity)
Encoding Sets as Bit Vectors
• Encode sets using 0/1 (bit, boolean) vectors
– One dimension per element in the universal set
• Interpret set intersection as bitwise AND, and
set union as bitwise OR
• Example: C1 = 10111; C2 = 10011
– Size of intersection = 3; size of union = 4,
– Jaccard similarity (not distance) = 3/4
– Distance: d(C1,C2) = 1 – (Jaccard similarity) = 1/4
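A sketch of the same example with Python integers as bit vectors; `&` gives the intersection, `|` the union, and counting 1-bits gives the set sizes:

```python
c1 = 0b10111
c2 = 0b10011
intersection = bin(c1 & c2).count("1")   # 3
union = bin(c1 | c2).count("1")          # 4
sim = intersection / union               # 3/4
dist = 1 - sim                           # 1/4
```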
From Sets to Boolean Matrices
• Rows = elements (shingles)
• Columns = sets (documents)
– 1 in row e and column s if and only if e is a
member of s
– Column similarity is the Jaccard similarity of
the corresponding sets (rows with value 1)
– Typical matrix is sparse!
• Each document is a column. Example matrix
(rows = shingles, columns = documents):
0101
0111
1001
1000
1010
1011
0111
– Example: sim(C1, C2) = ?
• Size of intersection = 3; size of union = 6,
Jaccard similarity (not distance) = 3/6
• d(C1, C2) = 1 − (Jaccard similarity) = 3/6
Outline: Finding Similar Columns
• So far:
– Documents → Sets of shingles
– Represent sets as boolean vectors in a matrix
• Next goal: Find similar columns while
computing small signatures
– Similarity of columns == similarity of signatures
Outline: Finding Similar Columns
• Next Goal: Find similar columns, Small signatures
• Naïve approach:
– 1) Signatures of columns: small summaries of columns
– 2) Examine pairs of signatures to find similar columns
– 3) Optional: Check that columns with similar signatures are
really similar
• Warnings:
– Comparing all pairs may take too much time: Job for LSH
• These methods can produce false negatives, and even false
positives (if the optional check is not made)
Hashing Columns (Signatures)
• Key idea: “hash” each column C to a small signature
h(C), such that:
– (1) h(C) is small enough that the signature fits in RAM
– (2) sim(C1, C2) is the same as the “similarity” of signatures
h(C1) and h(C2)
• Goal: Find a hash function h(·) such that:
– If sim(C1,C2) is high, then with high prob. h(C1) = h(C2)
– If sim(C1,C2) is low, then with high prob. h(C1) ≠ h(C2)
• Hash docs into buckets. Expect that “most” pairs of
near duplicate docs hash into the same bucket!
Min-Hashing
• Goal: Find a hash function h(·) such that:
– if sim(C1,C2) is high, then with high prob. h(C1) = h(C2)
– if sim(C1,C2) is low, then with high prob. h(C1) ≠ h(C2)
• Clearly, the hash function depends on
the similarity metric:
– Not all similarity metrics have a suitable
hash function
• There is a suitable hash function for
the Jaccard similarity: It is called Min-Hashing
Min-Hashing
• Imagine the rows of the boolean matrix permuted
under random permutation π
• Define a “hash” function hπ(C) = the index of the
first (in the permuted order π) row in which column
C has value 1:
hπ(C) = min π(C)
• Use several (e.g., 100) independent hash functions
(that is, permutations) to create a signature of a
column
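A sketch of min-hashing with explicit random permutations; fixing the seed makes every column see the same sequence of permutations (in practice, use the row-hashing trick shown a few slides later):

```python
import random

# Min-hash signature of one 0/1 column under num_hashes random permutations.
def minhash_signature(column: list, num_hashes: int, seed: int = 0) -> list:
    n = len(column)
    rng = random.Random(seed)        # same seed -> same permutations everywhere
    sig = []
    for _ in range(num_hashes):
        perm = list(range(n))
        rng.shuffle(perm)            # perm[r] = position of row r under pi
        # h_pi(C): first position, in permuted order, in which C has a 1
        sig.append(min(perm[r] for r in range(n) if column[r] == 1) + 1)
    return sig
```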
Min-Hashing Example
Input matrix (Shingles x Documents):
0101
0101
1010
1010
1010
1001
0101
Permutations π (one per row of the signature matrix):
π1 = (3, 4, 7, 2, 6, 1, 5)
π2 = (5, 7, 6, 3, 1, 2, 4)
π3 = (4, 5, 1, 6, 7, 3, 2)
Signature matrix M:
1212
1412
2121
(E.g., the 2nd element of a permutation is the first to map to a 1;
the 4th element of another permutation is the first to map to a 1.)
Similarity for Signatures
• We know: Pr[hπ(C1) = hπ(C2)] = sim(C1, C2)
• Now generalize to multiple hash functions
• The similarity of two signatures is the fraction of
the hash functions in which they agree
• Note: Because of the Min-Hash property, the
similarity of columns is the same as the expected
similarity of their signatures
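A quick empirical check of this property, reusing the `minhash_signature` sketch above on two illustrative columns with Jaccard similarity 1/3:

```python
c1 = [0, 0, 1, 1, 1, 1, 0]
c2 = [0, 1, 0, 0, 1, 1, 1]
both = sum(a & b for a, b in zip(c1, c2))     # 2 rows with 1 in both
either = sum(a | b for a, b in zip(c1, c2))   # 6 rows with 1 in either
print("sim =", both / either)                 # 0.333...

s1 = minhash_signature(c1, 10_000)            # same default seed, so both
s2 = minhash_signature(c2, 10_000)            # columns see the same permutations
agree = sum(a == b for a, b in zip(s1, s2)) / len(s1)
print("fraction of agreeing hash functions =", agree)  # close to 1/3
```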
Similarity
For the input matrix, permutations, and signature matrix M of the
previous slide:
Similarities:  1-3   2-4   1-2   3-4
Col/Col        0.75  0.75  0     0
Sig/Sig        0.67  1.00  0     0
Implementation Trick
• Row hashing!
– Pick K = 100 hash functions ki
– Ordering under ki gives a random row permutation!
• One-pass implementation
– For each column C and hash-func. ki keep a “slot” for the
min-hash value
– Initialize all sig(C)[i] = ∞
– Scan rows looking for 1s
• Suppose row j has 1 in column C
• Then for each ki :
– If ki(j) < sig(C)[i], then sig(C)[i] ← ki(j)
• How to pick a random hash function h(x)?
Universal hashing:
ha,b(x) = ((a·x + b) mod p) mod N
where:
a, b … random integers
p … prime number (p > N)
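A sketch of this one-pass scheme; the sparse-column representation and the choice p = 2^31 − 1 are assumptions for illustration:

```python
import random

# One-pass min-hashing: K universal hash functions
# h_{a,b}(x) = ((a*x + b) mod p) mod N stand in for K row permutations.
def minhash_signatures(columns, num_rows, K=100, p=2_147_483_647, seed=0):
    """columns: list of sets, each holding the row indices where that column has a 1."""
    rng = random.Random(seed)
    funcs = [(rng.randrange(1, p), rng.randrange(p)) for _ in range(K)]
    sig = [[float("inf")] * K for _ in columns]      # sig(C)[i] = infinity
    for j in range(num_rows):                        # scan rows looking for 1s
        hj = [((a * j + b) % p) % num_rows for a, b in funcs]
        for c, rows in enumerate(columns):
            if j in rows:                            # row j has a 1 in column C
                for i in range(K):
                    if hj[i] < sig[c][i]:
                        sig[c][i] = hj[i]            # keep the running minimum
    return sig
```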
Locality Sensitive Hashing
Step 3: Locality-Sensitive Hashing:
Focus on pairs of signatures likely to be
from similar documents
(Document → set of k-shingles → signatures → candidate pairs:
those pairs of signatures that we need to test for similarity)
LSH: First Cut
• Goal: Find documents with Jaccard similarity at
least s (for some similarity threshold, e.g., s = 0.8)
• LSH – General idea: Use a function f(x,y) that tells
whether x and y are a candidate pair: a pair of
elements whose similarity must be evaluated
• For Min-Hash matrices:
– Hash columns of signature matrix M to many buckets
– Each pair of documents that hashes into the
same bucket is a candidate pair
Partition M into b Bands
The signature matrix M is divided into b bands of r rows each;
each column of M is one signature.
Partition M into Bands
• Divide matrix M into b bands of r rows
• For each band, hash its portion of each
column to a hash table with k buckets
– Make k as large as possible
• Candidate column pairs are those that hash
to the same bucket for ≥ 1 band
• Tune b and r to catch most similar pairs,
but few non-similar pairs
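A sketch of the banding step, with one simplification: instead of hashing each band portion into k explicit buckets, we key a Python dict on the band's value tuple, which acts as one bucket per distinct band value:

```python
from collections import defaultdict
from itertools import combinations

# Candidate pairs: columns whose signatures agree in >= 1 band.
def lsh_candidate_pairs(signatures, b, r):
    """signatures: list of equal-length signatures, each of length b*r."""
    candidates = set()
    for band in range(b):
        buckets = defaultdict(list)
        for col, sig in enumerate(signatures):
            chunk = tuple(sig[band * r:(band + 1) * r])   # this band's portion
            buckets[chunk].append(col)
        for cols in buckets.values():
            candidates.update(combinations(sorted(cols), 2))
    return candidates
```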
Hashing Bands
For each of the b bands of r rows of matrix M, each column’s band
portion is hashed to its own set of buckets:
• Columns 2 and 6 share a bucket in some band, so they are
probably identical (candidate pair)
• Columns 6 and 7 never share a bucket, so they are surely different
Simplifying Assumption
• There are enough buckets that columns are
unlikely to hash to the same bucket unless they are
identical in a particular band
• Hereafter, we assume that “same bucket” means
“identical in that band”
• Assumption needed only to simplify analysis, not for
correctness of algorithm
Example of Bands
Assume the following case:
• Suppose 100,000 columns of M (100k docs)
• Signatures of 100 integers (rows)
• Therefore, signatures take 40 MB
• Choose b = 20 bands of r = 5 integers/band
• Goal: Find pairs of documents that
are at least s = 0.8 similar
C1, C2 are 80% Similar
• Find pairs of ≥ s = 0.8 similarity, set b = 20, r = 5
• Assume: sim(C1, C2) = 0.8
– Since sim(C1, C2) ≥ s, we want C1, C2 to be a candidate pair:
We want them to hash to at least 1 common bucket (at least
one band is identical)
• Probability C1, C2 identical in one particular
band: (0.8)^5 = 0.328
• Probability C1, C2 are not identical in any of the 20 bands:
(1 − 0.328)^20 = 0.00035
– i.e., about 1/3000th of the 80%-similar column pairs
are false negatives (we miss them)
– We would find 99.965% of the pairs of truly similar documents
C1, C2 are 30% Similar
• Find pairs of ≥ s = 0.8 similarity, set b = 20, r = 5
• Assume: sim(C1, C2) = 0.3
– Since sim(C1, C2) < s, we want C1, C2 to hash to NO
common buckets (all bands should be different)
• Probability C1, C2 identical in one particular band:
(0.3)^5 = 0.00243
• Probability C1, C2 identical in at least 1 of 20 bands:
1 − (1 − 0.00243)^20 = 0.0474
– In other words, approximately 4.74% of pairs of docs with
similarity 0.3 end up becoming candidate pairs
• They are false positives since we will have to examine them (they
are candidate pairs) but then it will turn out their similarity is below
threshold s
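Both calculations in a few lines, a sketch to make the arithmetic checkable:

```python
b, r = 20, 5

sim = 0.8
p_band = sim ** r                  # 0.328: one particular band identical
false_neg = (1 - p_band) ** b      # 0.00035: no band identical (pair is missed)

sim = 0.3
p_band = sim ** r                  # 0.00243
false_pos = 1 - (1 - p_band) ** b  # 0.0474: at least one band identical
```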
LSH Involves a Tradeoff
• Pick:
– The number of Min-Hashes (rows of M)
– The number of bands b, and
– The number of rows r per band
to balance false positives/negatives
• Example: If we had only 15 bands of 5 rows,
the number of false positives would go down,
but the number of false negatives would go
up
What b Bands of r Rows Gives You
For two sets of similarity t = sim(C1, C2), the probability of sharing
a bucket:
• t^r = probability that all rows of a band are equal
• 1 − t^r = probability that some row of a band is unequal
• (1 − t^r)^b = probability that no bands are identical
• 1 − (1 − t^r)^b = probability that at least one band is identical,
i.e., the probability of sharing a bucket
• The similarity threshold (where this S-curve rises) is s ≈ (1/b)^(1/r)
Example: b = 20; r = 5
• Similarity threshold s
• Prob. that at least 1 band is
identical:
s     1 − (1 − s^r)^b
.2    .006
.3    .047
.4    .186
.5    .470
.6    .802
.7    .975
.8    .9996
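A few lines to regenerate this table; note that for b = 20, r = 5 the threshold (1/b)^(1/r) is about 0.55, right where the probabilities climb fastest:

```python
b, r = 20, 5
for s in (0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8):
    print(f"{s:.1f}  {1 - (1 - s**r)**b:.4f}")
```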
LSH Summary
• Tune M, b, r to get almost all pairs with similar
signatures, but eliminate most pairs that do not
have similar signatures
• Check in main memory that candidate pairs
really do have similar signatures
• Optional: In another pass through data, check
that the remaining candidate pairs really
represent similar documents
Summary: 3 Steps
• Shingling: Convert documents to sets
– We used hashing to assign each shingle an ID
• Min-Hashing: Convert large sets to short signatures,
while preserving similarity
– We used similarity preserving hashing to generate
signatures with property Pr[hπ(C1) = hπ(C2)] = sim(C1, C2)
– We used hashing to get around generating random
permutations
• Locality-Sensitive Hashing: Focus on pairs of
signatures likely to be from similar documents
– We used hashing to find candidate pairs of similarity ≥ s
Speaker notes
1. 10 nearest neighbors, 20,000 image database
2. 10 nearest neighbors, 20,000 image database
3. 10 nearest neighbors, 2.3 million image database
4. Each agrees with prob. s. So to estimate s we compute what fraction of hash functions agree
5. False negative: a false negative is an error in which a test result improperly indicates no presence of a condition (the result is negative), when in reality it is present.