
Natural Language Engineering 1 (1): 000–000.
Back at the beginning of 2000, I was a member of a working group tasked to come up with some guidelines for revamping my University's website. During one of our meetings, someone made the suggestion that decisions about how to structure and present information on the website should be driven by the kinds of questions that users come to the site with. Suddenly a light went on, and there appeared an idea for data gathering that might provide us with some useful information. To find out what people really wanted to know when they visited the website, we would replace the University's search engine by a page that invited the user to type in his or her query as a full natural language question. Appropriately chosen examples would be given to demonstrate that using real questions delivered better pages as a result.
The data gathered would tell us what people were really looking for, more than could be gleaned from conventional search queries, and would therefore help us to better structure the information available on the website.
Thanks to a very cooperative University webmaster and a supportive administration, our JustAsk! web page was up and running within weeks. Of course, we didn't actually implement a question-answering system. Instead, behind the page was a script which, given any submitted query, would strip out any stop words it contained (after logging the full query, of course), and then simply pass the remaining words in the query on to the same search engine that had been used previously. Within a few days of the deception being put in place, a number of people independently commented on how much better the University's search engine was, now that we had added a natural language capability. I came clean and denied all responsibility for any perceived improvements, of course; but it's entirely possible that users did get better results simply because, even after stripping out stop words, their queries tended to become longer, and contained more content words, when expressed as questions.
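The trick behind the page is simple enough to sketch in a few lines. The following is a minimal illustration of the idea, not the original JustAsk! script; the stop-word list here is an invented, abbreviated assumption, and real deployments would use a much larger one.

```python
# Sketch of the JustAsk! deception described above: log the full question,
# strip stop words, and hand the remaining content words to the existing
# search engine. The stop-word list below is a small illustrative sample.
STOP_WORDS = {
    "a", "an", "and", "the", "i", "my", "is", "are", "was", "were",
    "do", "does", "did", "what", "who", "where", "when", "why", "how",
    "in", "on", "of", "to", "for",
}

def to_search_query(question: str) -> str:
    """Reduce a natural language question to a bag of content words."""
    words = question.lower().strip("?!. ").split()
    content = [w for w in words if w not in STOP_WORDS]
    return " ".join(content)

if __name__ == "__main__":
    print(to_search_query("Where do I pay my tuition fees?"))
    # -> "pay tuition fees"
```

Note how the question form naturally yields more content words than a typical 2.4-word query would, which may explain the perceived improvement.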
Every Sunday at midnight since then, I have been sent a log file containing the previous week's queries. The web page in question has now moved and become inaccessible, at least to humans: these days, the logs I receive contain nothing but the URLs of porn sites, online casinos, and Viagra ads. But over the period from 2000 to 2005, before the spambots kicked into action, we accumulated around two million queries. That's not a lot for a search engine, but still enough, one would think, to be instructive about user interests in a limited domain.

0 Industry Watch is a semi-regular column that looks at commercial applications of natural language technology. The author can be contacted at [email protected].
Now, there are all sorts of methodological flaws with this experiment, but the single most important result of analysing the data is hard to deny: despite our best attempts at exhorting the user to provide full natural language questions, only around 8% of the queries submitted via the page were in fact questions. The habits of googling 2.4 words at a time are well entrenched and hard to break. It has always seemed to me that this is a significant problem that any attempt to commercialise natural language question answering as the next step beyond conventional web search must somehow address.
Which is not to say that many have not tried. Most people will be familiar with Ask Jeeves, for example (now simply www.ask.com), which burst onto the scene in 1998. In an excellent example of web democracy at work, popular questions were rewarded with the attention of human editors to ensure that they returned quality answers; if you had a minority interest, you'd be just as well to use a conventional search engine. But as the underlying technical issues to be faced in real automatic question answering became better understood, Ask Jeeves was followed by a slew of question-answering systems that really did attempt to take the human out of the loop. Many of these no longer exist, or have morphed into other products. The Electric Monk (1998) claimed to find answers to real English questions by reformulating questions to produce Alta Vista queries. Albert (2000) was branded as a multilingual, natural-language-capable, intelligent search engine, subsequently purchased by FAST, passed on to Overture and then to Yahoo!. iPhrase (2000) appeared for a time as a natural language search and navigation solution at Yahoo! Finance, and was bought by IBM in 2005. All of these have more or less disappeared from the face of the web, but some more recent offerings that attempt to utilise some kind of natural language processing are still around. BrainBoost (2003; www.brainboost.com), like the Electric Monk, finds answers to natural language questions by translating them into multiple search engine queries and then extracting relevant answers from the returned pages. BrainBoost still exists, but is now owned by Answers.com (www.answers.com). MeaningMaster, created by Kathleen Dahlgren, appeared in 2004, claiming that it 'delivers results three times more accurate than Google', using a lexicon that provides contextual meanings for 350,000 words; MeaningMaster has since been rebranded as CognitionSearch (www.cognition.com).
There are no doubt many other attempts at natural language search. But the point is this: none of these have brought QA onto the mainstream internet user's horizon. As noted in this column a while back, even Google's own limited question answering capability remains effectively unadvertised (and in fact seems to have lost functionality it once had).
But maybe this is all set to change. If you hang out at Buck's Restaurant in Woodside in the Valley,1 you will already have heard of Powerset (www.powerset.com).
But if eavesdropping on venture capitalists over pancakes is not your thing, or you're not otherwise part of a local in-the-know circle, it's more than likely that Powerset has been below your radar. As they indicate on their web site, they have been in semi-stealth mode for over a year, only recently being outed to the general public by a story in the UK's Sunday Times2 and the announcement of a US$12.5 million funding deal.3 Powerset, it appears, is the company that will knock Google off its pedestal, by bringing real question answering to the great unwashed web. Barney Pell, the company's founder, talks up the possibilities on his blog:4 'Search is in its early days, and natural language is the future of search.' Great stuff.

All of Pell's arguments for QA as an improvement on search will be familiar to readers of this journal. A widely-used and effective natural-language-based question answering interface to the web would do wonders for the visibility of natural language processing as a research field. But, given the lacklustre appeal of earlier attempts, why should we expect Powerset to do any better? Danny Sullivan's rant against question answering5 presents the sceptics' position. Sullivan shares the concern I indicated above about getting people to stop typing 2.4-word queries: 'People search however they want—and right now, they use only a few words . . . Getting inside the minds and whispering "type longer" isn't going to be fun.' Google's response to challenges from companies like Powerset was summed up by Peter Norvig, Director of Research, thus: 'They have maybe one small lever that they suspect is huge. They don't realise that [all] they have [is] a better door latch on a [Boeing] 747. Now all they have to do is build a 747.' Nonetheless, Powerset is backed by some big names—people who you'd assume have a good sense of judgement—so maybe this time it will be different.
Powerset has been getting the media attention, but over on the East Coast of the US there is QA activity too. Hakia claims to be building the Web's new 'meaning-based' search engine; check out the beta version at http://www.hakia.com. Hakia's Riza Berkan may not have Esther Dyson and an army of angel investors from the search world on board, but he is being advised by Yorick Wilks. Also on the East Coast, Teragram is quietly plugging away at improving search by adding linguistic sensitivity. The company's Direct Answers product, with an enterprise version announced in June, was chosen by KMWorld Magazine as the Trend-Setting Product of 2006.6 In use at AOL, this technology delivers short, specific answers by 'grasping the essential question'. Teragram has also been named to the 2006 EContent 100, the premier list of '100 Companies That Matter Most in the Digital Content Industry' as judged by EContent Magazine.7

Microsoft looks to be gearing up for QA too. The company has bought Colloquis (www.colloquis.com), whose offering claims to use natural language processing to give companies a way to do automated customer-service online without the need for a human customer-service agent. You can now buy the Colloquis technology as a hosted service, rebranded Windows Live Service Agents.

1 www.valleywag.com/tech/buck's/wanted-bucks-restaurant-vc-spotter-175471.php.
2 See www.timesonline.co.uk/article/0,,2095-2459650.html.
3 Remember, this is being written some three months before you would have had any chance to read it. All sorts of things could have happened in the interim. Powerset may already be your home page.
5 See http://blog.searchenginewatch.com/blog/061005-095006.
6 See http://public-issues.com/2006/12/teragrams-direct-answers-chosen-by.html.
Which almost brings us back to where we started with Ask Jeeves: perhaps inspired by a deviant reparsing of 'Windows Live Service Agents', ChaCha (www.chacha.com) is a 'human-powered search engine' that uses an army of humans to provide answers to questions: 'ChaCha only provides quality, human approved results'. I typed into ChaCha: 'Is 2007 the year for question answering?', and BrettaF, my guide, went off looking for an answer, only to pass me a minute or two later to jamieT, 'another guide who can help you search even better'; jamieT in turn passed me on to KarenB, 'another guide who can help you search even better' . . . But I'm in an airport lounge with my plane about to depart, so I reverted to feeling lucky with Google. I got a job ad for a position in Ruslan Mitkov's group, but I was no closer to an answer. Maybe by the time you read this, things will be different.
If 2.4-word queries are the major roadblock to question answering on the web, then the analogous challenge for voice recognition on mobiles must be the widespread use of SMS. It's pretty hard to beat those thumbs, especially when they are on the hands of a teenager. But that may be set to change too: Nuance's mobile speech platform now provides a 'dictate-anywhere' function on the Windows Mobile and Symbian operating systems. To showcase the technology, Nuance challenged Ben Cook, the world champion texter, to compete with its software to determine what would be the fastest and most accurate way to send text messages. Cook holds the record for the fastest entry of a 160-character message on a mobile device, at 42.22 seconds. In the bake-off with Nuance, he finished in 48 seconds; using the Nuance software, it took 16.32 seconds to compose the message. Watch it on YouTube at http://www.youtube.com/watch?v=-L4Jk6GDud0.
So it looks like the peaceful, SMS-induced silence on public transport over the last two or three years was only temporary. I see a bleak future where everyone sends text messages by talking, just like they used to a few years back, into their phones in public places. Just one more excuse to buy those neat Bose noise-cancelling headphones for your iPod.

Source: http://web.science.mq.edu.au/~rdale/publications/industrywatch/2007-V13-1.pdf
