Java Doc for StatisticsTracking.java — Web Crawler » heritrix » org.archive.crawler.framework



org.archive.crawler.framework.StatisticsTracking

All known subclasses: org.archive.crawler.framework.AbstractTracker

StatisticsTracking
public interface StatisticsTracking extends Runnable
An interface for objects that want to collect statistics on running crawls. An implementation of this is referenced in the crawl order and loaded when the crawl begins.

It will be given a reference to the relevant CrawlController. The CrawlController will contain any additional configuration information needed.

Any class that implements this interface can be specified as a statistics tracker in a crawl order. The CrawlController will then create and initialize a copy of it and call its start() method.

This interface also specifies several methods for accessing data that the CrawlController or the URIFrontier may need at run time but do not want to track themselves. org.archive.crawler.framework.AbstractTracker implements these. If more than one StatisticsTracking class is defined in the crawl order, only the first one will be used to access this data.

It is recommended that implementations register for org.archive.crawler.event.CrawlStatusListener CrawlStatus events and org.archive.crawler.event.CrawlURIDispositionListener CrawlURIDisposition events in order to properly monitor a crawl. Both are registered with the CrawlController.
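The registration pattern recommended above can be sketched as follows. This is a hypothetical illustration only: the real Heritrix types (CrawlController, CrawlStatusListener) are not reproduced here; CrawlControllerStub and StatusListener are simplified stand-ins invented for this example.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for Heritrix's CrawlStatusListener.
interface StatusListener {
    void crawlEnded(String message);
}

// Simplified stand-in for Heritrix's CrawlController.
class CrawlControllerStub {
    final List<StatusListener> statusListeners = new ArrayList<>();

    void addCrawlStatusListener(StatusListener l) {
        statusListeners.add(l);
    }
}

// The tracker registers itself for status events during
// initialization, as the interface documentation recommends.
class SketchTracker implements StatusListener {
    private String lastStatus = "not started";

    void initialize(CrawlControllerStub c) {
        c.addCrawlStatusListener(this); // register for CrawlStatus events
    }

    @Override
    public void crawlEnded(String message) {
        lastStatus = message; // update internal statistics on each event
    }

    String lastStatus() {
        return lastStatus;
    }
}
```

The controller later fires events to every registered listener; the tracker's callbacks update its internal statistics as the crawl progresses.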
author:
   Kristinn Sigurdsson
See Also:   AbstractTracker
See Also:   org.archive.crawler.event.CrawlStatusListener
See Also:   org.archive.crawler.event.CrawlURIDispositionListener
See Also:   org.archive.crawler.framework.CrawlController



Field Summary
final public static String SEED_DISPOSITION_DISREGARD
final public static String SEED_DISPOSITION_FAILURE
final public static String SEED_DISPOSITION_NOT_PROCESSED
final public static String SEED_DISPOSITION_RETRY
final public static String SEED_DISPOSITION_SUCCESS
    


Method Summary
public int activeThreadCount()
     Get the number of active (non-paused) threads.
public long averageDepth()
public float congestionRatio()
public long crawlDuration()
     Returns how long the current crawl has been running (excluding any time spent paused/suspended/stopped) since it began.
public double currentProcessedDocsPerSec()
     Returns an estimate of recent document download rates based on a queue of recently seen CrawlURIs (as of last snapshot).
public int currentProcessedKBPerSec()
     Calculates an estimate of the rate, in KB, at which documents are currently being processed by the crawler.
public long deepestUri()
public long getCrawlerTotalElapsedTime()
     Total amount of time spent actively crawling so far.
public Map getProgressStatistics()
public String getProgressStatisticsLine()
public Iterator getSeedRecordsSortedByStatusCode()
     Get a SeedRecord iterator for the job being monitored.
public void initialize(CrawlController c)
     Do initialization.
public void noteStart()
     Start the tracker's crawl timing.
public double processedDocsPerSec()
public long processedKBPerSec()
public String progressStatisticsLegend()
public long successfullyFetchedCount()
     Number of successfully processed URIs.
public long totalBytesCrawled()
     Returns the total number of uncompressed bytes crawled.
public long totalBytesWritten()
     Returns the total number of uncompressed bytes written.
public long totalCount()

Field Detail
SEED_DISPOSITION_DISREGARD
final public static String SEED_DISPOSITION_DISREGARD
Seed was disregarded



SEED_DISPOSITION_FAILURE
final public static String SEED_DISPOSITION_FAILURE
Failed to crawl seed



SEED_DISPOSITION_NOT_PROCESSED
final public static String SEED_DISPOSITION_NOT_PROCESSED
Seed has not been processed



SEED_DISPOSITION_RETRY
final public static String SEED_DISPOSITION_RETRY
Failed to crawl seed, will retry



SEED_DISPOSITION_SUCCESS
final public static String SEED_DISPOSITION_SUCCESS
Seed successfully crawled





Method Detail
activeThreadCount
public int activeThreadCount()
Get the number of active (non-paused) threads.
Returns: the number of active (non-paused) threads.



averageDepth
public long averageDepth()



congestionRatio
public float congestionRatio()



crawlDuration
public long crawlDuration()
Returns how long the current crawl has been running (excluding any time spent paused/suspended/stopped) since it began.
Returns: the length of time, in milliseconds, that this crawl has been running.



currentProcessedDocsPerSec
public double currentProcessedDocsPerSec()
Returns an estimate of recent document download rates based on a queue of recently seen CrawlURIs (as of last snapshot).
Returns: the rate per second of documents gathered during the last snapshot.



currentProcessedKBPerSec
public int currentProcessedKBPerSec()
Calculates an estimate of the rate, in KB, at which documents are currently being processed by the crawler. For more accurate estimates, set a larger queue size, or collect and average multiple values (as of last snapshot).
Returns: the rate per second of KB gathered during the last snapshot.
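The snapshot-queue estimate described above can be sketched with a bounded deque of (timestamp, cumulative-bytes) samples. This is an illustrative reimplementation, not Heritrix's code; the class and method names are invented for this example.

```java
import java.util.ArrayDeque;
import java.util.Deque;

class RateSnapshot {
    private final int maxSamples;
    // Each sample is { timeMillis, totalBytesCrawledSoFar }.
    private final Deque<long[]> samples = new ArrayDeque<>();

    RateSnapshot(int maxSamples) {
        this.maxSamples = maxSamples;
    }

    void record(long timeMillis, long totalBytesSoFar) {
        samples.addLast(new long[] { timeMillis, totalBytesSoFar });
        if (samples.size() > maxSamples) {
            samples.removeFirst(); // a larger queue smooths the estimate
        }
    }

    /** KB per second between the oldest and newest retained samples. */
    int currentKBPerSec() {
        if (samples.size() < 2) {
            return 0; // not enough data for a rate yet
        }
        long[] oldest = samples.peekFirst();
        long[] newest = samples.peekLast();
        long elapsedMs = newest[0] - oldest[0];
        if (elapsedMs <= 0) {
            return 0;
        }
        long bytes = newest[1] - oldest[1];
        return (int) ((bytes / 1024) * 1000 / elapsedMs);
    }
}
```

With samples at t=0 (0 bytes) and t=1000 ms (102400 bytes), the estimate is 100 KB/sec; increasing `maxSamples` widens the window and smooths out bursts, which is the trade-off the description above alludes to.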



deepestUri
public long deepestUri()



getCrawlerTotalElapsedTime
public long getCrawlerTotalElapsedTime()
Total amount of time spent actively crawling so far.

Returns the total amount of time (in milliseconds) that has elapsed from the start of the crawl until the current time, or, if the crawl has ended, until the end of the crawl, minus any time spent paused.
Returns: total amount of time (in milliseconds) spent crawling so far.
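The pause-excluding contract above can be illustrated with a minimal stopwatch. This is a sketch under assumed semantics, not Heritrix's implementation; all names here are invented, and timestamps are passed in explicitly to keep the example deterministic.

```java
class CrawlClock {
    private long startTime = -1;
    private long pausedAt = -1;     // -1 means not currently paused
    private long totalPausedMs = 0;

    void noteStart(long nowMs) {
        startTime = nowMs;
    }

    void notePause(long nowMs) {
        pausedAt = nowMs;
    }

    void noteResume(long nowMs) {
        totalPausedMs += nowMs - pausedAt; // close out the paused span
        pausedAt = -1;
    }

    /** Active crawling time in milliseconds, excluding paused spans. */
    long elapsedMs(long nowMs) {
        if (startTime < 0) {
            return 0; // crawl never started
        }
        long paused = totalPausedMs;
        if (pausedAt >= 0) {
            paused += nowMs - pausedAt; // still paused right now
        }
        return (nowMs - startTime) - paused;
    }
}
```

For a crawl started at t=0, paused from t=400 to t=700, the elapsed time at t=1000 is 700 ms: wall-clock time minus the 300 ms paused span.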




getProgressStatistics
public Map getProgressStatistics()
Returns: a Map of progress statistics.



getProgressStatisticsLine
public String getProgressStatisticsLine()
Returns: a line of progress statistics.



getSeedRecordsSortedByStatusCode
public Iterator getSeedRecordsSortedByStatusCode()
Get a SeedRecord iterator for the job being monitored. If the job is no longer running, stored values will be returned. If the job is running, the current seed iterator will be fetched and stored values will be updated.

Sort order is:
No status code (not processed)
Status codes smaller than 0 (largest to smallest)
Status codes larger than 0 (largest to smallest)

Note: This iterator will iterate over a list of SeedRecords.
Returns: the seed iterator.
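The three-tier sort order above can be expressed as a comparator over status codes, with an unprocessed seed modeled as a null code. This is an illustrative sketch, not Heritrix's actual comparator; the class name is invented for this example.

```java
import java.util.Comparator;

// Orders status codes as: no code first, then negative codes
// (largest to smallest), then positive codes (largest to smallest).
class SeedStatusOrder implements Comparator<Integer> {
    @Override
    public int compare(Integer a, Integer b) {
        int ga = group(a), gb = group(b);
        if (ga != gb) {
            return Integer.compare(ga, gb);
        }
        if (a == null) {
            return 0; // both unprocessed: no further ordering
        }
        return Integer.compare(b, a); // largest to smallest within a group
    }

    private int group(Integer code) {
        if (code == null) {
            return 0; // not processed
        }
        return code < 0 ? 1 : 2; // negative codes before positive ones
    }
}
```

Sorting `{200, null, -1, 404, -7}` with this comparator yields `{null, -1, -7, 404, 200}`, matching the order described above.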




initialize
public void initialize(CrawlController c) throws FatalConfigurationException
Do initialization. The CrawlController will call this method before calling the start() method.
Parameters:
  c - The CrawlController running the crawl that this class is to gather statistics on.
throws:
  FatalConfigurationException -



noteStart
public void noteStart()
Start the tracker's crawl timing.



processedDocsPerSec
public double processedDocsPerSec()
Returns the number of documents that have been processed per second over the life of the crawl (as of last snapshot).
Returns: the rate per second of documents gathered so far.



processedKBPerSec
public long processedKBPerSec()
Calculates the rate, in KB, at which data has been processed over the life of the crawl (as of last snapshot).
Returns: the rate per second of KB gathered so far.



progressStatisticsLegend
public String progressStatisticsLegend()
Returns: the legend of progress statistics.



successfullyFetchedCount
public long successfullyFetchedCount()
Number of successfully processed URIs.

If the crawl is not running (paused or stopped), this will return the value of the last snapshot.
Returns: the number of successfully fetched URIs.
See Also:   org.archive.crawler.framework.Frontier.succeededFetchCount




totalBytesCrawled
public long totalBytesCrawled()
Returns the total number of uncompressed bytes crawled. Stored data may be much smaller due to compression or duplicate-reduction policies.
Returns: the total number of uncompressed bytes crawled.



totalBytesWritten
public long totalBytesWritten()
Returns the total number of uncompressed bytes processed. Stored data may be much smaller due to compression or duplicate-reduction policies.
Returns: the total number of uncompressed bytes written to disk.



totalCount
public long totalCount()
Returns: total number of URIs (processed + queued + currently being processed).


