org.apache.nutch.crawl
Class CrawlDbMerger

java.lang.Object
  extended by org.apache.hadoop.conf.Configured
      extended by org.apache.nutch.crawl.CrawlDbMerger
All Implemented Interfaces:
Configurable

public class CrawlDbMerger
extends Configured

This tool merges several CrawlDbs into one, optionally filtering URLs through the current set of URLFilters to skip prohibited pages.

The tool can also be used for filtering alone; in that case, only one CrawlDb should be specified in the arguments.

If more than one CrawlDb contains information about the same URL, only the most recent version is retained, as determined by the value of CrawlDatum.getFetchTime(). However, metadata from all versions is accumulated, with newer values taking precedence over older ones.
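
A typical command-line invocation might look like the following sketch; the mergedb command name, the -filter switch, and all paths are assumptions to verify against the usage message printed by bin/nutch in your release:

  bin/nutch mergedb crawl/merged_crawldb crawl/crawldb1 crawl/crawldb2 -filter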

Author:
Andrzej Bialecki

Nested Class Summary
static class CrawlDbMerger.Merger

Constructor Summary
CrawlDbMerger(Configuration conf)

Method Summary
static JobConf createMergeJob(Configuration conf, Path output)
static void main(String[] args)
void merge(Path output, Path[] dbs, boolean filter)

Methods inherited from class org.apache.hadoop.conf.Configured
getConf, setConf
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Constructor Detail

CrawlDbMerger

public CrawlDbMerger(Configuration conf)

Method Detail

merge

public void merge(Path output,
                  Path[] dbs,
                  boolean filter)
           throws Exception
Throws:
Exception
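
A minimal programmatic sketch, assuming NutchConfiguration.create() is available to build a Nutch-aware Configuration; all paths below are illustrative:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.nutch.crawl.CrawlDbMerger;
  import org.apache.nutch.util.NutchConfiguration;

  public class MergeCrawlDbs {
    public static void main(String[] args) throws Exception {
      // Load nutch-default.xml / nutch-site.xml on top of the Hadoop defaults.
      Configuration conf = NutchConfiguration.create();
      CrawlDbMerger merger = new CrawlDbMerger(conf);

      // Output CrawlDb and the input CrawlDbs to merge (hypothetical paths).
      Path output = new Path("crawl/merged_crawldb");
      Path[] dbs = { new Path("crawl/crawldb1"), new Path("crawl/crawldb2") };

      // true: run URLs through the configured URLFilters while merging.
      merger.merge(output, dbs, true);
    }
  }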

createMergeJob

public static JobConf createMergeJob(Configuration conf,
                                     Path output)
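
merge(Path, Path[], boolean) is the natural entry point, but the returned JobConf can also be driven directly. A sketch under two assumptions: the old org.apache.hadoop.mapred API with JobClient.runJob, and a CrawlDb layout whose data lives in a "current" subdirectory. Note that merge() may perform follow-up steps (such as installing the job output as the new CrawlDb) that this sketch omits:

  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.mapred.JobClient;
  import org.apache.hadoop.mapred.JobConf;
  import org.apache.nutch.crawl.CrawlDbMerger;
  import org.apache.nutch.util.NutchConfiguration;

  public class RunMergeJob {
    public static void main(String[] args) throws Exception {
      JobConf job = CrawlDbMerger.createMergeJob(
          NutchConfiguration.create(), new Path("crawl/merged_crawldb"));
      // Each input CrawlDb's data is assumed to live under "current".
      job.addInputPath(new Path("crawl/crawldb1/current"));
      job.addInputPath(new Path("crawl/crawldb2/current"));
      JobClient.runJob(job);  // submit the job and wait for completion
    }
  }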

main

public static void main(String[] args)
                 throws Exception
Parameters:
args -
Throws:
Exception
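
The expected arguments presumably mirror merge(Path output, Path[] dbs, boolean filter): the output CrawlDb first, followed by one or more input CrawlDbs, with an optional switch enabling URL filtering. This ordering is an assumption based on the merge signature; running the tool without arguments should print the authoritative usage message.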


Copyright © 2006 The Apache Software Foundation