Web harvesting

Posted 2008-05-16 11:27
It's hard to argue with the proposition that the World Wide Web is the largest repository of information that has ever existed. In just over a decade, the Web has moved from a university curiosity to a fundamental research, marketing and communications vehicle that impinges upon the everyday life of most people in the developed world. But there's a catch, of course. As the amount of information on the Web grows, that information becomes ever harder to keep track of and use.
This vast amount of freely available information is spread over billions of Web pages, each with its own independent structure and format. So how do you find the information you're looking for in a useful format -- and do it quickly and easily without breaking the bank?
  Search Isn't Enough
Search engines are a big help, but they can do only part of the work, and they are hard-pressed to keep up with daily changes. For all the power of Google and its kin, all that search engines can do is locate information and point to it. They go only two or three levels deep into a Web site to find information and then return URLs. They also find and return meta descriptions and meta keywords embedded in Web pages, but these may well be inaccurate.
Consider that even when you use a search engine to locate data, you still have to do the following tasks to capture the information you need:
- Scan the content until you find the information.
- Mark the information (usually by highlighting with a mouse).
- Copy the information.
- Switch to another application (such as a spreadsheet, database or word processor).
- Paste the information into that application.

A better solution, especially for companies that are aiming to exploit a broad swath of data about markets or competitors, lies with Web harvesting tools.
Web harvesting software automatically extracts information from the Web, picking up where search engines leave off and doing the work they can't. Extraction tools automate the reading, copying and pasting necessary to collect information for analysis, and they have proved useful for pulling together information on competitors, prices and financial data of all types.
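To make that concrete, here is a minimal sketch in Python of the read-copy-paste loop such a tool automates. The URL and the price pattern are illustrative assumptions, not a real data source:

```python
# Minimal sketch of the "scan, mark, copy, paste" loop an extraction tool
# automates. The target URL and the price pattern are hypothetical.
import csv
import re
import urllib.request

URL = "http://example.com/products"  # hypothetical competitor price list

with urllib.request.urlopen(URL) as response:
    html = response.read().decode("utf-8", errors="replace")

# "Scan" and "mark": find every dollar amount in the page text.
prices = re.findall(r"\$\d+(?:\.\d{2})?", html)

# "Switch applications" and "paste": land the data in a spreadsheet-friendly file.
with open("prices.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["price"])
    for p in prices:
        writer.writerow([p])
```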
  Harvesting Techniques
There are three ways we can extract more useful information from the Web.
The first technique, Web content harvesting, is concerned directly with the specific content of documents or their descriptions, such as HTML files, images or e-mail messages. Since most text documents are relatively unstructured (at least as far as machine interpretation is concerned), one common approach is to exploit what's already known about the general structure of documents and map this to some data model.
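As a rough illustration of that mapping, the following sketch uses Python's standard-library HTML parser to project a page's known structure (title and headings) onto a simple record; the sample markup is invented:

```python
# Illustrative only: map the known structure of an HTML page (title,
# headings) onto a small data model, using just the standard library.
from html.parser import HTMLParser

class OutlineParser(HTMLParser):
    """Collects the page title and section headings into a simple record."""
    def __init__(self):
        super().__init__()
        self.record = {"title": None, "headings": []}
        self._current = None  # tag we are currently inside, if any

    def handle_starttag(self, tag, attrs):
        if tag in ("title", "h1", "h2", "h3"):
            self._current = tag

    def handle_data(self, data):
        text = data.strip()
        if not self._current or not text:
            return
        if self._current == "title":
            self.record["title"] = text
        else:
            self.record["headings"].append((self._current, text))

    def handle_endtag(self, tag):
        if tag == self._current:
            self._current = None

parser = OutlineParser()
parser.feed("<html><title>Quarterly Report</title><h1>Results</h1>"
            "<h2>Revenue</h2><h2>Costs</h2></html>")
print(parser.record)
# {'title': 'Quarterly Report', 'headings': [('h1', 'Results'), ('h2', 'Revenue'), ('h2', 'Costs')]}
```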
Another approach to Web content harvesting involves trying to improve on the content searches that tools like search engines perform. This type of content harvesting goes beyond keyword extraction and the production of simple statistics relating to words and phrases in documents.
The second technique, Web structure harvesting, takes advantage of the fact that Web pages can reveal more information than just their obvious content. Links from other sources that point to a particular Web page indicate the popularity of that page, while links within a Web page that point to other resources may indicate the richness or variety of topics covered in that page. This is like analyzing bibliographical citations -- a paper that's often cited in bibliographies and other papers is usually considered to be important.
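A toy Python example of this kind of link analysis: given a made-up miniature link graph, rank pages by inbound links, just as citation counts rank papers:

```python
# Toy illustration of structure harvesting: rank pages by inbound links,
# the way citation counts rank papers. The link graph here is fabricated.
from collections import Counter

# page -> pages it links to (hypothetical mini-crawl)
link_graph = {
    "home.html":  ["products.html", "about.html"],
    "blog1.html": ["products.html"],
    "blog2.html": ["products.html", "home.html"],
    "about.html": ["home.html"],
}

inbound = Counter(target for targets in link_graph.values() for target in targets)
for page, count in inbound.most_common():
    print(f"{page}: {count} inbound link(s)")
# products.html: 3 inbound link(s)  -- the "often cited" page
```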
The third technique, Web usage harvesting, uses data recorded by Web servers about user interactions to help understand user behavior and evaluate the effectiveness of the Web structure.
General access-pattern tracking analyzes Web logs to understand access patterns and trends in order to identify structural issues and resource groupings.
Customized usage tracking analyzes individual trends so that Web sites can be personalized to specific users. Over time, based on access patterns, a site can be dynamically customized for a user in terms of the information displayed, the depth of the site structure and the format of the resources presented.
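In its simplest form, general access-pattern tracking is just log analysis. The Python sketch below tallies requests per URL from a server log in the Common Log Format; the log path and the assumption that the log uses that format are illustrative:

```python
# Sketch of general access-pattern tracking: tally requests per URL from a
# Common Log Format access log. The log path below is a placeholder.
import re
from collections import Counter

LOG_LINE = re.compile(r'"(?:GET|POST) (\S+) HTTP/[\d.]+"')

def top_pages(log_path, n=10):
    hits = Counter()
    with open(log_path) as log:
        for line in log:
            match = LOG_LINE.search(line)
            if match:
                hits[match.group(1)] += 1
    return hits.most_common(n)

# e.g. top_pages("/var/log/apache2/access.log")
```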
  Also Known As . . .
Over the past decade, the terminology used to describe Web harvesting has undergone several changes. In 1996, researcher Oren Etzioni wrote a paper called "The World Wide Web: Quagmire or Gold Mine?" which was published in the journal Communications of the ACM. Etzioni defined Web mining as the use of data mining techniques to automatically discover and extract information from Web documents and services.
In the late 1990s, Richard Hackathorn coined the term Web farming to describe a discipline combining aspects of data warehousing, Web data mining and knowledge-base creation.
Around the turn of the millennium, Web harvesting began to replace Web mining as the fashionable buzzphrase, although it can mean different things to different people. Web harvesting can be synonymous with Web mining, Web farming and Web scraping, but it can have other meanings as well. One widespread usage of the term refers specifically to the searching of Web pages for e-mail addresses for resale and use in commercial solicitations (i.e., spam).
The Web site of the Medical University of South Carolina defines Web harvesting as "the process of downloading RSS feeds and consolidating them for display."
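Taken literally, that definition amounts to fetching feeds and merging their items. A bare-bones Python sketch, with placeholder feed URLs:

```python
# Bare-bones version of that RSS-centric definition: download several feeds
# and consolidate their items into one list. The feed URLs are placeholders.
import urllib.request
import xml.etree.ElementTree as ET

FEEDS = [
    "http://example.com/news.rss",     # hypothetical
    "http://example.org/updates.rss",  # hypothetical
]

def consolidate(feed_urls):
    items = []
    for url in feed_urls:
        with urllib.request.urlopen(url) as resp:
            root = ET.fromstring(resp.read())
        for item in root.iter("item"):  # RSS 2.0 <item> elements
            title = item.findtext("title", default="(untitled)")
            link = item.findtext("link", default="")
            items.append((title, link))
    return items

# for title, link in consolidate(FEEDS): print(title, link)
```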
Another related term is Web scraping, an obvious derivation from the 1980s catchphrase "screen scraping," which described PC- or minicomputer-based applications that accessed mainframe systems by emulating 3270 or VT100 terminals. Such applications were quick and cheap but not always reliable. Similarly, Web scraping applications process a Web page's HTML to extract meaningful data, often from live data feeds or by manipulating specific applications. Web scrapers are also cheap and useful but of questionable reliability.
Kay is a Computerworld contributing writer in Worcester, Mass. Contact him at russkay@charter.net.
  
  
   
VARIETIES OF WEB HARVESTING

[Figure: Web harvesting covers three main techniques for gathering information, with several subcategories of functionality.]
