Lecture 9: Text Categorization
Introduction to Machine Learning and Data Mining, Prof. Dr. Igor Trajkovski, NYUS, Spring 2008

Applications of text categorization:
• Web pages – recommending, Google/Yahoo-like directory classification
• Newsgroup and blog messages – recommending, spam filtering, sentiment analysis for marketing
• News articles – personalized newspaper
• Email messages – prioritizing, folderizing, spam filtering, advertising on Gmail

Characteristics of text categorization:
• Representations of text are very high dimensional.
• Vectors are sparse, since most words are rare (see the sketch after this list).
  – Zipf’s law and heavy-tailed distributions: the most frequent word occurs approximately twice as often as the second most frequent word, which occurs twice as often as the fourth most frequent word, etc.
• High-bias algorithms that prevent overfitting in high-dimensional space are best.
  – SVMs maximize the margin to avoid over-fitting in high-dimensional spaces.
• For most text categorization tasks, there are many irrelevant and many relevant features.
  – Methods that sum evidence from many or all features (e.g. naive Bayes, kNN, neural nets, SVMs) tend to work better than ones that try to isolate just a few relevant features (decision-tree or rule induction).
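To make the sparsity point concrete, here is a minimal Python sketch (not from the original slides) that maps a document to a sparse bag-of-words count vector; the tiny stop-word list and example sentence are illustrative only.

```python
from collections import Counter

# Tiny illustrative stop-word list; a real system would use a much larger one.
STOP_WORDS = {"the", "a", "an", "of", "and", "to", "in"}

def bag_of_words(text):
    """Map a document to a sparse {term: count} vector.

    Only the few terms that actually occur are stored, even though the full
    vocabulary (the vector's dimensionality) may contain tens of thousands of words.
    """
    tokens = [t for t in text.lower().split() if t not in STOP_WORDS]
    return Counter(tokens)

print(bag_of_words("The quick brown fox jumps over the lazy dog"))
# Counter({'quick': 1, 'brown': 1, 'fox': 1, 'jumps': 1, 'over': 1, 'lazy': 1, 'dog': 1})
```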
Naive Bayes for text:
• Modeled as generating a bag of words for a document in a given category by repeatedly sampling with replacement from a vocabulary V = {w1, w2, …, wm} based on the probabilities P(wj | ci).
• Smooth probability estimates with Laplace m-estimates assuming a uniform distribution over all words (p = 1/|V|) and m = |V|.
  – Equivalent to seeing each word in each category exactly once.
Text naive Bayes algorithm (training):
    Let V be the vocabulary of all words in the documents in D
    For each category ci ∈ C
        Let Di be the subset of documents in D in category ci
        P(ci) = |Di| / |D|
        Let Ti be the concatenation of all the documents in Di
        Let ni be the total number of word occurrences in Ti
        For each word wj ∈ V
            Let nij be the number of occurrences of wj in Ti
            P(wj | ci) = (nij + 1) / (ni + |V|)

Text naive Bayes algorithm (testing):
    Given a test document X
    Let n be the number of word occurrences in X
    Return the category:
        argmax over ci ∈ C of  P(ci) · Π(i = 1..n) P(ai | ci)
    where ai is the word occurring in the ith position in X
(A Python sketch of both procedures follows the underflow notes below.)

Underflow prevention:
• Multiplying lots of probabilities, which are between 0 and 1 by definition, can result in floating-point underflow.
• Since log(xy) = log(x) + log(y), it is better to perform all computations by summing logs of probabilities rather than multiplying probabilities.
• Class with highest final un-normalized log probability score is still the most probable.
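A minimal Python sketch of the multinomial naive Bayes procedure above, using Laplace smoothing and summing log probabilities to avoid underflow. The data format (lists of tokens paired with a category label) and the toy training set are assumptions for illustration, not part of the original slides.

```python
import math
from collections import Counter, defaultdict

def train_naive_bayes(examples):
    """examples: list of (tokens, category). Returns log priors, log conditionals, vocabulary."""
    vocab = {w for tokens, _ in examples for w in tokens}
    docs_per_class = defaultdict(list)
    for tokens, c in examples:
        docs_per_class[c].append(tokens)

    log_prior, log_cond = {}, {}
    for c, docs in docs_per_class.items():
        log_prior[c] = math.log(len(docs) / len(examples))          # P(ci) = |Di| / |D|
        counts = Counter(w for tokens in docs for w in tokens)      # word occurrences in Ti
        n_i = sum(counts.values())                                  # total word occurrences in Ti
        # Laplace smoothing: P(wj | ci) = (nij + 1) / (ni + |V|)
        log_cond[c] = {w: math.log((counts[w] + 1) / (n_i + len(vocab))) for w in vocab}
    return log_prior, log_cond, vocab

def classify(tokens, log_prior, log_cond, vocab):
    """Return argmax_ci [ log P(ci) + sum_j log P(aj | ci) ]; unknown words are skipped."""
    def score(c):
        return log_prior[c] + sum(log_cond[c][w] for w in tokens if w in vocab)
    return max(log_prior, key=score)

train = [("win cash prize now".split(), "spam"),
         ("meeting agenda for monday".split(), "ham")]
model = train_naive_bayes(train)
print(classify("cash prize".split(), *model))   # 'spam'
```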
• Classification results of naive Bayes (the class with maximum posterior probability) are usually fairly accurate.
• However, due to the inadequacy of the conditional independence assumption, the actual posterior-probability numerical estimates are not.
– Output probabilities are generally very close to 0 or 1.
• Measuring similarity of two texts is a well-studied problem.
• Standard metrics are based on a “bag of words” model of a document that ignores word order and syntactic structure.
• May involve removing common “stop words” and stemming.
• The vector-space model from Information Retrieval (IR) is the standard approach.
• Other metrics (e.g. edit distance) are also used.
• Assume t distinct terms remain after preprocessing; call them index terms or the vocabulary.
• These “orthogonal” terms form a vector space.
• Each term, i, in a document or query, j, is given a real-valued weight, wij.
• Both documents and queries are expressed as t-dimensional term vectors, e.g. dj = (w1j, w2j, …, wtj).
• A collection of n documents can be represented in the vector space model by a term-document matrix.
• An entry in the matrix corresponds to the “weight” of a term in the document; zero means the term has no significance in the document or it simply doesn’t exist in the document.
Term frequency (tf):
• More frequent terms in a document are more important, i.e. more indicative of the topic.
• May want to normalize term frequency (tf) by dividing by the frequency of the most common term in the document:
    tfij = fij / max{flj : term l in dj}

Inverse document frequency (idf):
• Terms that appear in many different documents are less indicative of the overall topic.
• idf = inverse document frequency of term i:
    idfi = log(N / dfi)
  where dfi is the number of documents containing term i and N is the total number of documents.
• An indication of a term's discrimination power.
• Log used to dampen the effect relative to tf.
TF-IDF weighting:
• A typical combined term importance indicator is tf-idf weighting:
    wij = tfij · idfi = tfij · log(N / dfi)
• A term occurring frequently in the document but rarely in the rest of the collection is given high weight.
• Many other ways of determining term weights have been proposed.
• Experimentally, tf-idf has been found to work well.
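A small Python sketch of the tf-idf weighting just described (tf normalized by the document's most frequent term, idf = log(N/df)); the toy corpus below is a made-up example, not from the slides.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: list of token lists. Returns one sparse {term: weight} vector per document."""
    N = len(docs)
    df = Counter()                      # dfi: number of documents containing term i
    for tokens in docs:
        df.update(set(tokens))

    vectors = []
    for tokens in docs:
        counts = Counter(tokens)
        max_f = max(counts.values())    # frequency of the most common term in this document
        # wij = tfij * idfi = (fij / max_f) * log(N / dfi)
        vectors.append({w: (f / max_f) * math.log(N / df[w]) for w, f in counts.items()})
    return vectors

corpus = ["new home sales top forecasts".split(),
          "home sales rise in july".split(),
          "increase in home sales in july".split()]
print(tfidf_vectors(corpus)[0]["new"])   # a term unique to doc 0 gets the highest idf
```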
Cosine similarity:
• Measures the cosine of the angle between two vectors.
• Inner product normalized by the vector lengths:
    CosSim(dj, q) = (dj · q) / (|dj| |q|)
Example, with D1 = (2, 3, 5), D2 = (3, 7, 1), Q = (0, 0, 2):
    CosSim(D1, Q) = 10 / √((4+9+25)(0+0+4)) = 0.81
    CosSim(D2, Q) = 2 / √((9+49+1)(0+0+4)) = 0.13
D1 is 6 times better than D2 using cosine similarity, but only 5 times better using the inner product.
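The example above can be checked with a few lines of Python (a sketch; the vectors D1, D2 and Q are read off from the sums inside the square roots):

```python
import math

def cos_sim(a, b):
    """Inner product divided by the product of the vector lengths."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

D1, D2, Q = (2, 3, 5), (3, 7, 1), (0, 0, 2)
print(round(cos_sim(D1, Q), 2))   # 0.81
print(round(cos_sim(D2, Q), 2))   # 0.13
```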
• Relevance feedback methods can be adapted for text categorization.
• Use standard TF/IDF weighted vectors to represent text documents (normalized by maximum term frequency).
• For each category, compute a prototype vector by summing the vectors of the training documents in the category.
• Assign test documents to the category with the closest prototype vector based on cosine similarity.
Prototype training algorithm:
    Assume the set of categories is {c1, c2, …, cn}
    For i from 1 to n, let pi = <0, 0, …, 0>   (init. prototype vectors)
    For each training example <x, c(x)> ∈ D
        Let d be the frequency-normalized TF/IDF term vector for doc x
        Let i = j where cj = c(x)
        Let pi = pi + d   (add d to the prototype of its class)

Prototype classification algorithm:
    Given test document x
    Let d be the TF/IDF weighted term vector for x
    Let m = –2   (init. maximum cosSim)
    For i from 1 to n:   (compute similarity to each prototype vector)
        Let s = cosSim(d, pi)
        If s > m
            Let m = s
            Let r = ci   (update most similar class prototype)
    Return class r
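A minimal Python version of the two procedures above (a sketch; documents are assumed to already be sparse TF/IDF dictionaries, for example as produced by the tf-idf sketch earlier):

```python
import math
from collections import defaultdict

def cos_sim(a, b):
    """Cosine similarity between two sparse {term: weight} vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def train_prototypes(examples):
    """examples: list of (tfidf_vector, category). Sum the vectors of each class into a prototype."""
    prototypes = defaultdict(dict)
    for vec, c in examples:
        p = prototypes[c]
        for t, w in vec.items():
            p[t] = p.get(t, 0.0) + w          # pi = pi + d
    return dict(prototypes)

def classify_prototype(vec, prototypes):
    """Return the category whose prototype is most cosine-similar to vec."""
    return max(prototypes, key=lambda c: cos_sim(vec, prototypes[c]))
```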
Properties of this prototype method:
• Does not guarantee a consistent hypothesis.
• Forms a simple generalization of the examples in each class.
• Prototype vector does not need to be averaged or otherwise normalized for length, since cosine similarity is insensitive to vector length.
• Classification is based on similarity to class prototypes.

K nearest neighbors for text:
[Figure: illustration of 3-nearest-neighbor classification for text documents]

Training:
    For each training example <x, c(x)> ∈ D
        Compute the corresponding TF-IDF vector, dx, for document x

Test instance y:
    Compute TF-IDF vector d for document y
    For each <x, c(x)> ∈ D
        Let sx = cosSim(d, dx)
    Sort examples, x, in D by decreasing value of sx
    Let N be the first k examples in D   (get most similar neighbors)
    Return the majority class of examples in N
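The same cosine similarity gives a compact kNN classifier matching the pseudocode above (a sketch; training examples are again assumed to be sparse TF-IDF dictionaries paired with category labels):

```python
import math
from collections import Counter

def cos_sim(a, b):
    """Cosine similarity between two sparse {term: weight} vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_classify(query_vec, training, k=3):
    """training: list of (tfidf_vector, category). Majority vote over the k most similar examples."""
    ranked = sorted(training, key=lambda ex: cos_sim(query_vec, ex[0]), reverse=True)
    neighbors = [c for _, c in ranked[:k]]
    return Counter(neighbors).most_common(1)[0][0]
```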
• An index that points from words to documents that contain them allows more rapid retrieval of similar documents.
• Once stop-words are eliminated, the remaining words are rare, so an inverted index narrows attention to a relatively small number of documents that share meaningful vocabulary with the test document.
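A sketch of an inverted index for this candidate-retrieval step (the document IDs and token-list representation are assumptions for illustration):

```python
from collections import defaultdict

def build_inverted_index(docs):
    """docs: {doc_id: token list}. Maps each word to the set of documents that contain it."""
    index = defaultdict(set)
    for doc_id, tokens in docs.items():
        for w in tokens:
            index[w].add(doc_id)
    return index

def candidate_docs(query_tokens, index):
    """Only documents sharing at least one word with the test document need to be scored."""
    candidates = set()
    for w in query_tokens:
        candidates |= index.get(w, set())
    return candidates
```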
Conclusions:
• Many important applications of classification to text.
• Requires an approach that works well with large, sparse feature vectors, since typically each word is a feature and most words are rare.
  – Naive Bayes
  – kNN with cosine similarity
  – SVMs

Source: http://www.time.mk/trajkovski/teaching/aim/Lecture9.pdf
