Nutch Crawler System Analysis

Contents

1 Nutch Overview
  1.1 Nutch Architecture
2 The Crawl
  2.1 The Crawler's Data Structures and Their Meaning
  2.2 Crawl Directory Analysis
  2.3 Crawl Process Overview
  2.4 Crawl Process Analysis
    2.4.1 The inject method
    2.4.2 The generate method
    2.4.3 The fetch method
    2.4.4 The parse method
    2.4.5 The update method
    2.4.6 The invert method
    2.4.7 The index method
    2.4.8 The dedup method
    2.4.9 The merge method
3 Configuration File Analysis
  3.1 Analysis of nutch-default.xml (3.1.1 through 3.1.25: individual properties)
  3.2 Analysis of regex-urlfilter.txt
  3.3 Analysis of regex-normalize.xml
  3.4 Summary
4 References

1 Nutch Overview

1.1 Nutch Architecture

2 The Crawl

2.1 The Crawler's Data Structures and Their Meaning

The crawl is driven by Nutch's crawler tool, which ties a family of tools to the job of building and maintaining several types of data structures: the web database, a series of segments, and the index. Each is described in detail below.

The physical files of the three live under the crawl output directory, in the crawldb folder, the segments folder, and the index folder respectively. What, then, does each of them store?

The web database, or WebDB, stores the link structure of the pages the crawler has fetched. It is used only by the Crawler and plays no part in the Searcher's work. The WebDB stores two kinds of entities: pages and links. A page entity represents an actual web page by describing its characteristic features; since there are many pages to describe, the WebDB indexes page entities in two ways, by the page's URL and by the MD5 hash of the page content. The features a page entity records include the number of links inside the page, crawl bookkeeping such as the time the page was fetched, and an importance score for the page. A link entity, in turn, describes the link relationship between two page entities. The WebDB thus forms a link graph over the fetched pages, in which the page entities are the nodes and the link entities are the edges.

A crawl produces a series of segments. Each segment holds the pages the Crawler fetched in one fetch cycle, together with the index of those pages. When crawling, the Crawler follows the link relationships in the WebDB and a given crawl policy to generate the fetchlist for each fetch cycle; the Fetcher then fetches the pages at the URLs on the fetchlist, indexes them, and stores them in the segment. A segment has a limited lifetime: once its pages are re-fetched by the Crawler, the segment produced by the earlier fetch becomes obsolete. On disk, segment folders are named by their creation time, which makes it easy to delete obsolete segments and reclaim storage.

The index covers all pages the Crawler has fetched; it is produced by merging the indexes of all the individual segments. Nutch indexes with Lucene, so Lucene's interfaces for manipulating indexes work on a Nutch index as well. Note, however, that a Lucene segment and a Nutch segment are different things: a Lucene segment is a part of an index, whereas a Nutch segment holds the page content and per-segment index for one portion of the WebDB, and the final index generated from the segments is, in the end, independent of them.
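To make the page/link structure concrete, here is a purely illustrative model of the WebDB as a graph. The class and field names below are hypothetical, invented for this sketch; the real entities live in the Nutch source, not in these classes.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Illustrative only: a toy model of the WebDB's two entity types.
    public class WebDbModel {

        static class Page {                // a node of the link graph
            String url;                    // index key #1: the page's URL
            String contentMd5;             // index key #2: MD5 of the page content
            int outlinkCount;              // number of links found in the page
            long fetchTime;                // when the page was fetched
            float score;                   // importance score of the page
        }

        static class Link {                // an edge of the link graph
            String fromUrl;                // source page
            String toUrl;                  // target page
        }

        // The WebDB indexes pages both ways, so lookup by URL or by MD5 is direct.
        Map<String, Page> pagesByUrl = new HashMap<String, Page>();
        Map<String, Page> pagesByMd5 = new HashMap<String, Page>();
        List<Link> links = new ArrayList<Link>();
    }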

2.2 Crawl Directory Analysis

A crawl produces five folders in total:

- crawldb: stores the known URLs along with their download dates, used to schedule page re-fetch checks.
- linkdb: stores the link relationships between URLs, derived by analysis after downloading completes.
- segments: stores the fetched pages. The number of subdirectories relates to the number of page levels fetched; normally each level gets its own subdirectory, named by its creation time for ease of management. In this run only one level was fetched, so only the directory 20090508173137 was created. Each segment subdirectory contains six subfolders:
  - content: the content of each downloaded page.
  - crawl_fetch: the status of each downloaded URL.
  - crawl_generate: the set of URLs awaiting download.
  - crawl_parse: the outlink database used to update crawldb.
  - parse_data: the outlinks and metadata parsed from each URL.
  - parse_text: the parsed text content of each URL.
- indexes: stores an independent index for each download round.
- index: a Lucene-format index directory, the complete index produced by merging everything under indexes.
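A quick way to see what crawldb actually holds is to read it directly. The sketch below assumes the Hadoop 0.19-era API bundled with Nutch 1.0 and the crawl directory of this walkthrough; each crawldb partition is a Hadoop MapFile of <Text url, CrawlDatum>.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.io.MapFile;
    import org.apache.hadoop.io.Text;
    import org.apache.nutch.crawl.CrawlDatum;

    public class CrawlDbPeek {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            // Each crawldb partition is a MapFile directory (data + index files).
            MapFile.Reader reader =
                new MapFile.Reader(fs, "20090508/crawldb/current/part-00000", conf);
            Text url = new Text();
            CrawlDatum datum = new CrawlDatum();
            while (reader.next(url, datum)) {
                // A CrawlDatum carries status, fetch time, retry count, score, etc.
                System.out.println(url + "\t" + datum);
            }
            reader.close();
        }
    }

Nutch's own readdb command gives essentially the same view, plus statistics.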

2.3 Crawl Process Overview

Nine classes are principally involved (a condensed driver sketch follows the list):

1. nutch.crawl.Injector: the injector, which adds URLs to the crawl database.
2. nutch.crawl.Generator: the generator, which produces the list of download tasks (the fetchlist).
3. nutch.fetcher.Fetcher: the fetcher, which downloads the scheduled pages.
4. nutch.parse.ParseSegment: the parser, which extracts content and parses it for next-level URLs.
5. nutch.crawl.CrawlDb: the database tool that manages the crawl database.
6. nutch.crawl.LinkDb: the tool that manages links.
7. nutch.indexer.Indexer: the indexer, which creates the index.
8. nutch.indexer.DeleteDuplicates: removes duplicate data.
9. nutch.indexer.IndexMerger: the index merger, which merges the local index of the current download with the historical indexes.
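How the nine tools chain together is easiest to see in the driver, sketched below. This is a condensed paraphrase of the control flow in org.apache.nutch.crawl.Crawl; the signatures are abbreviated from the 1.0 source and may differ in detail, and the real main() adds argument parsing, logging, and error handling.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.nutch.crawl.CrawlDb;
    import org.apache.nutch.crawl.Generator;
    import org.apache.nutch.crawl.Injector;
    import org.apache.nutch.crawl.LinkDb;
    import org.apache.nutch.fetcher.Fetcher;
    import org.apache.nutch.indexer.DeleteDuplicates;
    import org.apache.nutch.indexer.IndexMerger;
    import org.apache.nutch.indexer.Indexer;
    import org.apache.nutch.parse.ParseSegment;

    // Condensed, paraphrased sketch of the driver logic in org.apache.nutch.crawl.Crawl.
    public class CrawlSketch {

        static Path[] subDirs(FileSystem fs, Path dir) throws Exception {
            FileStatus[] stats = fs.listStatus(dir);
            Path[] paths = new Path[stats.length];
            for (int i = 0; i < stats.length; i++) paths[i] = stats[i].getPath();
            return paths;
        }

        public static void crawl(Configuration conf, Path dir, Path rootUrlDir,
                                 int depth, int topN, int threads) throws Exception {
            FileSystem fs = FileSystem.get(conf);
            Path crawlDb  = new Path(dir, "crawldb");
            Path linkDb   = new Path(dir, "linkdb");
            Path segments = new Path(dir, "segments");
            Path indexes  = new Path(dir, "indexes");
            Path index    = new Path(dir, "index");

            new Injector(conf).inject(crawlDb, rootUrlDir);      // step 2: seed crawldb

            for (int i = 0; i < depth; i++) {                    // steps 3-6: crawl loop
                Path segment = new Generator(conf).generate(
                    crawlDb, segments, -1, topN, System.currentTimeMillis());
                if (segment == null) break;                      // nothing due for fetch
                new Fetcher(conf).fetch(segment, threads);       // step 4: download
                new ParseSegment(conf).parse(segment);           // parse if fetcher didn't
                new CrawlDb(conf).update(crawlDb,
                    new Path[] { segment }, true, true);         // step 5: update crawldb
            }

            new LinkDb(conf).invert(linkDb,
                subDirs(fs, segments), true, true, false);            // step 7: invert links
            new Indexer(conf).index(indexes, crawlDb, linkDb,
                subDirs(fs, segments));                               // step 8: per-segment index
            new DeleteDuplicates(conf).dedup(new Path[] { indexes }); // step 9: dedup
            new IndexMerger(conf).merge(subDirs(fs, indexes),
                index, new Path(dir, "tmp"));                         // step 10: final merge
        }
    }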

2.4 Crawl Process Analysis

The Crawler works roughly as follows. First, the Crawler generates from the WebDB a set of URLs to fetch, called the fetchlist; the Fetcher download threads then fetch the pages on the fetchlist. If there are many download threads, many fetchlists are generated, one Fetcher per fetchlist. The Crawler then updates the WebDB with the fetched pages and, from the updated WebDB, generates a new fetchlist holding the not-yet-fetched and newly discovered URLs, and the next fetch cycle begins. This loop is known as the generate/fetch/update cycle.

URLs pointing to web resources on the same host are normally assigned to the same fetchlist, which prevents too many Fetchers from fetching from one host at the same time and overloading it. Nutch also honors the Robots Exclusion Protocol, so a site can control the Crawler through its robots.txt.
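A minimal sketch of that same-host rule, assuming nothing more than a hash of the hostname (the helper class here is hypothetical, not the Nutch partitioner, but the idea is the same):

    import java.net.URL;

    // Hypothetical illustration: URLs on one host always land in the same fetchlist,
    // because the partition is chosen by hashing the hostname, not the full URL.
    public class HostPartitioner {
        public static int fetchlistFor(String url, int numFetchlists) throws Exception {
            String host = new URL(url).getHost();
            return (host.hashCode() & Integer.MAX_VALUE) % numFetchlists;
        }

        public static void main(String[] args) throws Exception {
            // Both pages on the same host map to the same fetchlist index.
            System.out.println(fetchlistFor("http://example.com/a.html", 4));
            System.out.println(fetchlistFor("http://example.com/b.html", 4));
        }
    }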

In Nutch, the Crawler operation is implemented as a series of sub-operations, each of which Nutch also exposes as a sub-command that can be invoked on its own. The sub-operations are described below, with the command for each given in parentheses.

1. Create a new WebDB (admin db -create).
2. Inject the seed URLs into the WebDB (inject).
3. Generate a fetchlist from the WebDB and write it into a new segment (generate).
4. Fetch the pages at the URLs on the fetchlist (fetch).
5. Update the WebDB with the fetched pages (updatedb).
6. Loop over steps 3-5 until the preset crawl depth is reached.
7. Analyze the link relationships and generate inverted links (new in 1.0; its exact purpose is an open question here).
8. Index the fetched pages (index).
9. Eliminate pages with duplicate content, and duplicate URLs, from the index (dedup).
10. Merge the per-segment indexes into the final index used for search (merge).

The Crawler's detailed workflow is therefore: after a WebDB has been created (step 1), the generate/fetch/update loop (steps 3-6) starts up from a set of seed URLs. When that loop has fully finished, the Crawler builds an index over the segments generated by the crawl (steps 8-10). Each segment is indexed independently (step 8) before duplicate URLs are removed (step 9). Finally, the individual segment indexes are merged into a single final index (step 10).

One detail deserves attention. The dedup operation removes duplicate URLs from the segment indexes. But the WebDB never allows duplicate URLs in the first place, so why is this cleanup needed? The answer lies in re-crawling for updates: say you crawled a set of pages a month ago and re-crawl them now to pick up changes. The old segment remains in effect until it is deleted, so duplicates have to be removed across the old and new segments.
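As a toy illustration of the idea (Nutch's actual DeleteDuplicates runs as a MapReduce job over the indexes; the class below is invented for illustration): keep, for each URL, only the record from the newest fetch.

    import java.util.HashMap;
    import java.util.Map;

    // Toy illustration of cross-segment dedup: the same URL indexed by two fetch
    // rounds collapses to the most recent record.
    public class DedupSketch {
        static class Doc {
            final String url;
            final long fetchTime;
            Doc(String url, long fetchTime) { this.url = url; this.fetchTime = fetchTime; }
            public String toString() { return url + "@" + fetchTime; }
        }

        static Map<String, Doc> dedup(Doc[] docs) {
            Map<String, Doc> newest = new HashMap<String, Doc>();
            for (Doc d : docs) {
                Doc seen = newest.get(d.url);
                if (seen == null || d.fetchTime > seen.fetchTime) {
                    newest.put(d.url, d);   // keep only the latest fetch per URL
                }
            }
            return newest;
        }

        public static void main(String[] args) {
            Doc[] docs = {
                new Doc("http://example.com/", 20090408173137L),  // last month's segment
                new Doc("http://example.com/", 20090508173137L),  // this month's re-crawl
            };
            System.out.println(dedup(docs).values());  // only the newer record survives
        }
    }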

Below are the results of setting a breakpoint in the Crawl class and stepping through each method.

2.4.1 The inject method

Description: initializes the crawldb, reads the URL seed files, and injects their contents into the crawl database.

It first locates the directory holding the URL seed files, urls; under nutch 1.0 an error is raised if this directory has not been created. It then obtains Hadoop's temporary working folder: /tmp/hadoop-Administrator/mapred/. The log reads:

2009-05-08 15:41:36,640 INFO Injector - Injector: starting
2009-05-08 15:41:37,031 INFO Injector - Injector: crawlDb: 20090508/crawldb
2009-05-08 15:41:37,781 INFO Injector - Injector: urlDir: urls
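Conceptually, this first job turns each line of the seed file into a crawl db entry. The sketch below paraphrases the inject map step from the 1.0 source as best recalled; the real mapper is a nested class of Injector, whose normalizer/filter plumbing is set up in configure() and simplified here, so treat names and signatures as approximate.

    import java.io.IOException;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.WritableComparable;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;
    import org.apache.nutch.crawl.CrawlDatum;
    import org.apache.nutch.net.URLFilters;
    import org.apache.nutch.net.URLNormalizers;

    // Paraphrased sketch of the inject map step: one seed URL in, one CrawlDatum out.
    public class InjectMapperSketch extends MapReduceBase
        implements Mapper<WritableComparable, Text, Text, CrawlDatum> {

        private URLNormalizers urlNormalizers;  // set up in configure() in the real code
        private URLFilters filters;             // likewise
        private int interval = 2592000;         // db.fetch.interval.default (30 days)
        private float scoreInjected = 1.0f;     // db.score.injected

        public void map(WritableComparable key, Text value,
                        OutputCollector<Text, CrawlDatum> output, Reporter reporter)
            throws IOException {
            String url = value.toString();      // one line of urls/site.txt
            try {
                url = urlNormalizers.normalize(url, URLNormalizers.SCOPE_INJECT);
                url = filters.filter(url);      // returns null if the URL is rejected
            } catch (Exception e) {
                url = null;
            }
            if (url != null) {
                CrawlDatum datum = new CrawlDatum(CrawlDatum.STATUS_INJECTED, interval);
                datum.setScore(scoreInjected);
                output.collect(new Text(url), datum);
            }
        }
    }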

Next, some initialization settings are made, and JobClient.runJob from the Hadoop package is called; tracing into JobClient's submitJob method follows the submission of the whole job. The underlying mechanics belong to the analysis of another open-source project, Hadoop, with its elaborate MapReduce architecture, and are not analyzed here. Looking at submitJob: it first obtains a job id; after configureCommandLineOptions runs, a system folder is created under the temporary folder mentioned above, with a job_local_0001 folder beneath it.

Running writeSplitsFile then creates the job.split file under job_local_0001; writeXml writes job.xml; and jobSubmitClient.submitJob formally submits the whole job flow. The log:

2009-05-08 15:41:36,640 INFO Injector - Injector: starting
2009-05-08 15:41:37,031 INFO Injector - Injector: crawlDb: 20090508/crawldb
2009-05-08 15:41:37,781 INFO Injector - Injector: urlDir: urls
2009-05-08 15:52:41,734 INFO Injector - Injector: Converting injected urls to crawl db entries.
2009-05-08 15:56:22,203 INFO JvmMetrics - Initializing JVM Metrics with processName=JobTracker, sessionId=
2009-05-08 16:08:20,796 WARN JobClient - Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
2009-05-08 16:08:20,984 WARN JobClient - No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
2009-05-08 16:24:42,593 INFO FileInputFormat - Total input paths to process : 1
2009-05-08 16:38:29,437 INFO FileInputFormat - Total input paths to process : 1
2009-05-08 16:38:29,546 INFO MapTask - numReduceTasks: 1
2009-05-08 16:38:29,562 INFO MapTask - io.sort.mb = 100
2009-05-08 16:38:29,687 INFO MapTask - data buffer = 79691776/99614720
2009-05-08 16:38:29,687 INFO MapTask - record buffer = 262144/327680
2009-05-08 16:38:29,718 INFO PluginRepository - Plugins: looking in: D:\work\workspace\nutch_crawl\bin\plugins
2009-05-08 16:38:29,921 INFO PluginRepository - Plugin Auto-activation mode: true
2009-05-08 16:38:29,921 INFO PluginRepository - Registered Plugins:
2009-05-08 16:38:29,921 INFO PluginRepository - the nutch core extension points (nutch-extensionpoints)
2009-05-08 16:38:29,921 INFO PluginRepository - Basic Query Filter (query-basic)
2009-05-08 16:38:29,921 INFO PluginRepository - Basic URL Normalizer (urlnormalizer-basic)
2009-05-08 16:38:29,921 INFO PluginRepository - Basic Indexing Filter (index-basic)
2009-05-08 16:38:29,921 INFO PluginRepository - Html Parse Plug-in (parse-html)
2009-05-08 16:38:29,921 INFO PluginRepository - Site Query Filter (query-site)
2009-05-08 16:38:29,921 INFO PluginRepository - Basic Summarizer Plug-in (summary-basic)
2009-05-08 16:38:29,921 INFO PluginRepository - HTTP Framework (lib-http)
2009-05-08 16:38:29,921 INFO PluginRepository - Text Parse Plug-in (parse-text)
2009-05-08 16:38:29,921 INFO PluginRepository - Pass-through URL Normalizer (urlnormalizer-pass)
2009-05-08 16:38:29,921 INFO PluginRepository - Regex URL Filter (urlfilter-regex)
2009-05-08 16:38:29,921 INFO PluginRepository - Http Protocol Plug-in (protocol-http)
2009-05-08 16:38:29,921 INFO PluginRepository - XML Response Writer Plug-in (response-xml)
2009-05-08 16:38:29,921 INFO PluginRepository - Regex URL Normalizer (urlnormalizer-regex)
2009-05-08 16:38:29,921 INFO PluginRepository - OPIC Scoring Plug-in (scoring-opic)
2009-05-08 16:38:29,921 INFO PluginRepository - CyberNeko HTML Parser (lib-nekohtml)
2009-05-08 16:38:29,921 INFO PluginRepository - Anchor Indexing Filter (index-anchor)
2009-05-08 16:38:29,921 INFO PluginRepository - JavaScript Parser (parse-js)
2009-05-08 16:38:29,921 INFO PluginRepository - URL Query Filter (query-url)
2009-05-08 16:38:29,921 INFO PluginRepository - Regex URL Filter Framework (lib-regex-filter)
2009-05-08 16:38:29,921 INFO PluginRepository - JSON Response Writer Plug-in (response-json)
2009-05-08 16:38:29,921 INFO PluginRepository - Registered Extension-Points:
2009-05-08 16:38:29,921 INFO PluginRepository - Nutch Summarizer (org.apache.nutch.searcher.Summarizer)
2009-05-08 16:38:29,921 INFO PluginRepository - Nutch Protocol (org.apache.nutch.protocol.Protocol)
2009-05-08 16:38:29,921 INFO PluginRepository - Nutch Analysis (org.apache.nutch.analysis.NutchAnalyzer)
2009-05-08 16:38:29,921 INFO PluginRepository - Nutch Field Filter (org.apache.nutch.indexer.field.FieldFilter)
2009-05-08 16:38:29,921 INFO PluginRepository - HTML Parse Filter (org.apache.nutch.parse.HtmlParseFilter)
2009-05-08 16:38:29,921 INFO PluginRepository - Nutch Query Filter (org.apache.nutch.searcher.QueryFilter)
2009-05-08 16:38:29,921 INFO PluginRepository - Nutch Search Results Response Writer (org.apache.nutch.searcher.response.ResponseWriter)
2009-05-08 16:38:29,921 INFO PluginRepository - Nutch URL Normalizer (org.apache.nutch.net.URLNormalizer)
2009-05-08 16:38:29,921 INFO PluginRepository - Nutch URL Filter (org.apache.nutch.net.URLFilter)
2009-05-08 16:38:29,921 INFO PluginRepository - Nutch Online Search Results Clustering Plugin (org.apache.nutch.clustering.OnlineClusterer)
2009-05-08 16:38:29,921 INFO PluginRepository - Nutch Indexing Filter (org.apache.nutch.indexer.IndexingFilter)
2009-05-08 16:38:29,921 INFO PluginRepository - Nutch Content Parser (org.apache.nutch.parse.Parser)
2009-05-08 16:38:29,921 INFO PluginRepository - Nutch Scoring (org.apache.nutch.scoring.ScoringFilter)
2009-05-08 16:38:29,921 INFO PluginRepository - Ontology Model Loader (org.apache.nutch.ontology.Ontology)
2009-05-08 16:38:29,968 INFO Configuration - found resource crawl-urlfilter.txt at file:/D:/work/workspace/nutch_crawl/bin/crawl-urlfilter.txt
2009-05-08 16:38:29,984 WARN RegexURLNormalizer - can't find rules for scope 'inject', using default
2009-05-08 16:38:29,984 INFO MapTask - Starting flush of map output
2009-05-08 16:38:30,203 INFO MapTask - Finished spill 0
2009-05-08 16:38:30,203 INFO TaskRunner - Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
2009-05-08 16:38:30,218 INFO LocalJobRunner - file:/D:/work/workspace/nutch_crawl/urls/site.txt:0+19
2009-05-08 16:38:30,218 INFO TaskRunner - Task attempt_local_0001_m_000000_0 done.
2009-05-08 16:38:30,234 INFO LocalJobRunner -
2009-05-08 16:38:30,250 INFO Merger - Merging 1 sorted segments
2009-05-08 16:38:30,265 INFO Merger - Down to the last merge-pass, with 1 segments left of total size: 53 bytes
2009-05-08 16:38:30,265 INFO LocalJobRunner -
2009-05-08 16:38:30,390 INFO TaskRunner - Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
2009-05-08 16:38:30,390 INFO LocalJobRunner -
2009-05-08 16:38:30,390 INFO TaskRunner - Task attempt_local_0001_r_000000_0 is allowed to commit now
2009-05-08 16:38:30,406 INFO FileOutputCommitter - Saved output of task attempt_local_0001_r_000000_0 to file:/tmp/hadoop-Administrator/mapred/temp/inject-temp-474192304
2009-05-08 16:38:30,406 INFO LocalJobRunner - reduce > reduce
2009-05-08 16:38:30,406 INFO TaskRunner - Task attempt_local_0001_r_000000_0 done.

After execution, the returned running value is:

Job: job_local_0001
file: file:/tmp/hadoop-Administrator/mapred/system/job_local_0001/job.xml
tracking URL: http://localhost:8080/

2009-05-08 16:47:14,093 INFO JobClient - Running job: job_local_0001
2009-05-08 16:49:51,859 INFO JobClient - Job complete: job_local_0001
2009-05-08 16:51:36,062 INFO JobClient - Counters: 11
2009-05-08 16:51:36,062 INFO JobClient - File Systems
2009-05-08 16:51:36,062 INFO JobClient - Local bytes read=51591
2009-05-08 16:51:36,062 INFO JobClient - Local bytes written=104337
2009-05-08 16:51:36,062 INFO JobClient - Map-Reduce Framework
2009-05-08 16:51:36,062 INFO JobClient - Reduce input groups=1
2009-05-08 16:51:36,062 INFO JobClient - Combine output records=0
2009-05-08 16:51:36,062 INFO JobClient - Map input records=1
2009-05-08 16:51:36,062 INFO JobClient - Reduce output records=1
2009-05-08 16:51:36,062 INFO JobClient - Map output bytes=49
2009-05-08 16:51:36,062 INFO JobClient - Map input bytes=19
2009-05-08 16:51:36,062 INFO JobClient - Combine input records=0
2009-05-08 16:51:36,062 INFO JobClient - Map output records=1
2009-05-08 16:51:36,062 INFO JobClient - Reduce input records=1

This concludes the first runJob call. (Summary: to be written.)

Next, the crawldb folder is created and the urls are merged and injected into it:

    JobClient.runJob(mergeJob);
    CrawlDb.install(mergeJob, crawlDb);
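Putting the two phases together: inject is structured as two MapReduce jobs, the first converting the url text files into CrawlDatum entries in a temp directory, the second merging that temp directory into the existing crawldb, after which CrawlDb.install swaps the new output into place. The outline below paraphrases Injector.inject from the 1.0 source as best recalled; treat the details as approximate.

    import java.io.IOException;
    import java.util.Random;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.nutch.crawl.CrawlDb;
    import org.apache.nutch.crawl.Injector;

    // Paraphrased outline of Injector.inject(crawlDb, urlDir) in Nutch 1.0.
    public class InjectOutline {
        public static void inject(JobConf conf, Path crawlDb, Path urlDir)
                throws IOException {
            Path tempDir = new Path("mapred/temp/inject-temp-"
                + new Random().nextInt(Integer.MAX_VALUE));

            // Job 1: url text files -> <Text url, CrawlDatum(STATUS_INJECTED)> in tempDir.
            JobConf sortJob = new JobConf(conf);
            FileInputFormat.addInputPath(sortJob, urlDir);
            sortJob.setMapperClass(Injector.InjectMapper.class);    // normalize + filter urls
            FileOutputFormat.setOutputPath(sortJob, tempDir);
            JobClient.runJob(sortJob);

            // Job 2: merge tempDir with the existing crawldb (job_local_0002 in this run).
            JobConf mergeJob = CrawlDb.createJob(conf, crawlDb);
            FileInputFormat.addInputPath(mergeJob, tempDir);
            mergeJob.setReducerClass(Injector.InjectReducer.class); // collapse duplicate urls
            JobClient.runJob(mergeJob);

            // Rotate the merge output into crawldb/current, then remove the temp files.
            CrawlDb.install(mergeJob, crawlDb);
            FileSystem.get(conf).delete(tempDir, true);
        }
    }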

This pass first creates a job_local_0002 directory under the temporary folder mentioned earlier and, as before, generates job.split and job.xml there; it then completes the creation of the crawldb, and finally deletes the files under the temp folder. With that, inject is finished. The closing portion of the log:

2009-05-08 17:03:57,250 INFO Injector - Injector: Merging injected urls into crawl db.
2009-05-08 17:10:01,015 INFO JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2009-05-08 17:10:15,953 WARN JobClient - Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
2009-05-08 17:10:16,156 WARN JobClient - No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
2009-05-08 17:12:15,296 INFO FileInputFormat - Total input paths to process : 1
2009-05-08 17:13:40,296 INFO FileInputFormat - Total input paths to process : 1
2009-05-08 17:13:40,406 INFO MapTask - numReduceTasks: 1
2009-05-08 17:13:40,406 INFO MapTask - io.sort.mb = 100
2009-05-08 17:13:40,515 INFO MapTask - data buffer = 79691776/99614720
2009-05-08 17:13:40,515 INFO MapTask - record buffer = 262144/327680
2009-05-08 17:13:40,546 INFO MapTask - Starting flush of map output
2009-05-08 17:13:40,765 INFO MapTask - Finished spill 0
2009-05-08 17:13:40,765 INFO TaskRunner - Task:attempt_local_0002_m_000000_0 is done. And is in the process of commiting
2009-05-08 17:13:40,765 INFO LocalJobRunner - file:/tmp/hadoop-Administrator/mapred/temp/inject-temp-474192304/part-00000:0+143
2009-05-08 17:13:40,765 INFO TaskRunner - Task attempt_local_0002_m_000000_0 done.
2009-05-08 17:13:40,796 INFO LocalJobRunner -
2009-05-08 17:13:40,796 INFO Merger - Merging 1 sorted segments
2009-05-08 17:13:40,796 INFO Merger - Down to the last merge-pass, with 1 segments left of total size: 53 bytes
2009-05-08 17:13:40,796 INFO LocalJobRunner -
2009-05-08 17:13:40,906 WARN NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2009-05-08 17:13:40,906 INFO CodecPool - Got brand-new compressor
2009-05-08 17:13:40,906 INFO TaskRunner - Task:attempt_local_0002_r_000000_0 is done. And is in the process of commiting
2009-05-08 17:13:40,906 INFO LocalJobRunner -
2009-05-08 17:13:40,906 INFO TaskRunner - Task attempt_local_0002_r_000000_0 is allowed to commit now
2009-05-08 17:13:40,921 INFO FileOutputCommitter - Saved output of task attempt_local_0002_r_000000_0 to file:/D:/work/workspace/nutch_crawl/20090508/crawldb/1896567745
2009-05-08 17:13:40,921 INFO LocalJobRunner - reduce > reduce
2009-05-08 17:13:40,937 INFO TaskRunner - Task attempt_local_0002_r_000000_0 done.
2009-05-08 17:13:46,781 INFO JobClient - Running job: job_local_0002
2009-05-08 17:14:55,125 INFO JobClient - Job complete: job_local_0002
2009-05-08 17:14:59,328 INFO JobClient - Counters: 11
2009-05-08 17:14:59,328 INFO JobClient - File Systems
2009-05-08 17:14:59,328 INFO JobClient - Local bytes read=103875
2009-05-08 17:14:59,328 INFO JobClient - Local bytes written=209385
2009-05-08 17:14:59,328 INFO JobClient - Map-Reduce Framework
2009-05-08 17:14:59,328 INFO JobClient - Reduce input groups=1
2009-05-08 17:14:59,328 INFO JobClient - Combine output records=0
2009-05-08 17:14:59,328 INFO JobClient - Map input records=1
2009-05-08 17:14:59,328 INFO JobClient - Reduce output records=1
2009-05-08 17:14:59,328 INFO JobClient - Map output bytes=49
2009-05-08 17:14:59,328 INFO JobClient - Map input bytes=57
2009-05-08 17:14:59,328 INFO JobClient - Combine input records=0
2009-05-08 17:14:59,328 INFO JobClient - Map output records=1
2009-05-08 17:14:59,328 INFO JobClient - Reduce input records=1
2009-05-08 17:17:30,984 INFO JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2009-05-08 17:20:02,390 INFO Injector - Injector: done

2.4.2 The generate method

Description: generates a new segment from the crawl database, then produces the list of download tasks (the fetchlist) from it.

    LockUtil.createLockFile(fs, lock, force);

Executing the line above first creates a .locked file under the crawldb directory, presumably to prevent the crawldb's data from being modified; its true purpose remains to be verified.
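A minimal sketch of the locking pattern this implies, assuming org.apache.nutch.util.LockUtil as quoted above (createLockFile is the call seen in the source; removeLockFile is assumed to be its counterpart):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.nutch.util.LockUtil;

    // Sketch: hold crawldb's .locked file for the duration of a read-modify cycle,
    // so two tools cannot rewrite the crawldb at the same time.
    public class LockSketch {
        public static void withCrawlDbLock(Configuration conf, Path crawlDb,
                                           boolean force, Runnable work)
                throws Exception {
            FileSystem fs = FileSystem.get(conf);
            Path lock = new Path(crawlDb, ".locked");   // file name seen in this run
            LockUtil.createLockFile(fs, lock, force);   // fails if already locked (unless forced)
            try {
                work.run();                             // e.g. the generate job
            } finally {
                LockUtil.removeLockFile(fs, lock);      // release when done
            }
        }
    }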

The process that follows is much the same as for inject above; refer to the steps there. The log:

2009-05-08 17:37:18,218 INFO Generator - Generator: Selecting best-scoring urls due for fetch.
2009-05-08 17:37:18,625 INFO Generator - Generator: starting
2009-05-08 17:37:18,937 INFO Generator - Generator: segment: 20090508/segments/20090508173137
2009-05-08 17:37:19,468 INFO Generator - Generator: filtering: true
2009-05-08 17:37:22,312 INFO Generator - Generator: topN: 50
2009-05-08 17:37:51,203 INFO Generator - Generator: jobtracker is local, generating exactly one partition.
2009-05-08 17:39:57,609 INFO JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2009-05-08 17:40:05,234 WARN JobClient - Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
2009-05-08 17:40:05,406 WARN JobClient - No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
2009-05-08 17:40:05,437 INFO FileInputFormat - Total input paths to process : 1
2009-05-08 17:40:06,062 INFO FileInputFormat - Total input paths to process : 1
2009-05-08 17:40:06,109 INFO MapTask - numReduceTasks: 1
[plugin-loading log omitted]
2009-05-08 17:40:06,312 INFO Configuration - found resource crawl-urlfilter.txt at file:/D:/work/workspace/nutch_crawl/bin/crawl-urlfilter.txt
2009-05-08 17:40:06,343 INFO FetchScheduleFactory - Using FetchSchedule impl: org.apache.nutch.crawl.DefaultFetchSchedule
2009-05-08 17:40:06,343 INFO AbstractFetchSchedule - defaultInterval=2592000
2009-05-08 17:40:06,343 INFO AbstractFetchSchedule - maxInterval=7776000
2009-05-08 17:40:06,343 INFO MapTask - io.sort.mb = 100
2009-05-08 17:40:06,437 INFO MapTask - data buffer = 79691776/99614720
2009-05-08 17:40:06,437 INFO MapTask - record buffer = 262144/327680
2009-05-08 17:40:06,453 WARN RegexURLNormalizer - can't find rules for scope 'partition', using default
2009-05-08 17:40:06,453 INFO MapTask - Starting flush of map output
2009-05-08 17:40:06,625 INFO MapTask - Finished spill 0
2009-05-08 17:40:06,640 INFO TaskRunner - Task:attempt_local_0003_m_000000_0 is done. And is in the process of commiting
2009-05-08 17:40:06,640 INFO LocalJobRunner - file:/D:/work/workspace/nutch_crawl/20090508/crawldb/current/part-00000/data:0+143
2009-05-08 17:40:06,640 INFO TaskRunner - Task attempt_local_0003_m_000000_0 done.
2009-05-08 17:40:06,656 INFO LocalJobRunner -
2009-05-08 17:40:06,656 INFO Merger - Merging 1 sorted segments
