Source Code Cross Referenced for AbstractConcurrentReadCache.java in Cache » OSCache » com.opensymphony.oscache.base.algorithm

/*
 * Copyright (c) 2002-2003 by OpenSymphony
 * All rights reserved.
 */
/*
 File: AbstractConcurrentReadCache

 Written by Doug Lea. Adapted from JDK1.2 HashMap.java and Hashtable.java
 which carries the following copyright:

 * Copyright 1997 by Sun Microsystems, Inc.,
 * 901 San Antonio Road, Palo Alto, California, 94303, U.S.A.
 * All rights reserved.
 *
 * This software is the confidential and proprietary information
 * of Sun Microsystems, Inc. ("Confidential Information").  You
 * shall not disclose such Confidential Information and shall use
 * it only in accordance with the terms of the license agreement
 * you entered into with Sun.

 This class is a modified version of ConcurrentReaderHashMap, which was written
 by Doug Lea (http://gee.cs.oswego.edu/dl/). The modifications were done
 by Pyxis Technologies. This is a base class for the OSCache module of the
 OpenSymphony project (www.opensymphony.com).

 History:
 Date       Who                What
 28oct1999  dl               Created
 14dec1999  dl               jmm snapshot
 19apr2000  dl               use barrierLock
 12jan2001  dl               public release
 Oct2001    abergevin@pyxis-tech.com
 Integrated persistence and outer algorithm support
 */
package com.opensymphony.oscache.base.algorithm;

/** OpenSymphony BEGIN */
import com.opensymphony.oscache.base.CacheEntry;
import com.opensymphony.oscache.base.persistence.CachePersistenceException;
import com.opensymphony.oscache.base.persistence.PersistenceListener;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

import java.io.IOException;
import java.io.Serializable;

import java.util.*;

/**
 * A version of Hashtable that supports mostly-concurrent reading, but exclusive writing.
 * Because reads are not limited to periods
 * without writes, a concurrent reader policy is weaker than a classic
 * reader/writer policy, but is generally faster and allows more
 * concurrency. This class is a good choice especially for tables that
 * are mainly created by one thread during the start-up phase of a
 * program, and from then on, are mainly read (with perhaps occasional
 * additions or removals) in many threads.  If you also need concurrency
 * among writes, consider instead using ConcurrentHashMap.
 * <p>
 *
 * Successful retrievals using get(key) and containsKey(key) usually
 * run without locking. Unsuccessful ones (i.e., when the key is not
 * present) do involve brief synchronization (locking).  Also, the
 * size and isEmpty methods are always synchronized.
 *
 * <p> Because retrieval operations can ordinarily overlap with
 * writing operations (i.e., put, remove, and their derivatives),
 * retrievals can only be guaranteed to return the results of the most
 * recently <em>completed</em> operations holding upon their
 * onset. Retrieval operations may or may not return results
 * reflecting in-progress writing operations.  However, the retrieval
 * operations do always return consistent results -- either those
 * holding before any single modification or after it, but never a
 * nonsense result.  For aggregate operations such as putAll and
 * clear, concurrent reads may reflect insertion or removal of only
 * some entries. In those rare contexts in which you use a hash table
 * to synchronize operations across threads (for example, to prevent
 * reads until after clears), you should either encase operations
 * in synchronized blocks, or instead use java.util.Hashtable.
 *
 * <p>
 *
 * This class also supports optional guaranteed
 * exclusive reads, simply by surrounding a call within a synchronized
 * block, as in <br>
 * <code>AbstractConcurrentReadCache t; ... Object v; <br>
 * synchronized(t) { v = t.get(k); } </code> <br>
 *
 * But this is not usually necessary in practice. For
 * example, it is generally inefficient to write:
 *
 * <pre>
 *   AbstractConcurrentReadCache t; ...            // Inefficient version
 *   Object key; ...
 *   Object value; ...
 *   synchronized(t) {
 *     if (!t.containsKey(key)) {
 *       t.put(key, value);
 *       // other code if not previously present
 *     }
 *     else {
 *       // other code if it was previously present
 *     }
 *   }
 *</pre>
 * Instead, just take advantage of the fact that put returns
 * null if the key was not previously present:
 * <pre>
 *   AbstractConcurrentReadCache t; ...                // Use this instead
 *   Object key; ...
 *   Object value; ...
 *   Object oldValue = t.put(key, value);
 *   if (oldValue == null) {
 *     // other code if not previously present
 *   }
 *   else {
 *     // other code if it was previously present
 *   }
 *</pre>
 * <p>
 *
 * Iterators and Enumerations (i.e., those returned by
 * keySet().iterator(), entrySet().iterator(), values().iterator(),
 * keys(), and elements()) return elements reflecting the state of the
 * hash table at some point at or since the creation of the
 * iterator/enumeration.  They will return at most one instance of
 * each element (via next()/nextElement()), but might or might not
 * reflect puts and removes that have been processed since they were
 * created.  They do <em>not</em> throw ConcurrentModificationException.
 * However, these iterators are designed to be used by only one
 * thread at a time. Sharing an iterator across multiple threads may
 * lead to unpredictable results if the table is being concurrently
 * modified.  Again, you can ensure interference-free iteration by
 * enclosing the iteration in a synchronized block.  <p>
 *
 * This class may be used as a direct replacement for any use of
 * java.util.Hashtable that does not depend on readers being blocked
 * during updates. Like Hashtable but unlike java.util.HashMap,
 * this class does NOT allow <tt>null</tt> to be used as a key or
 * value.  This class is also typically faster than ConcurrentHashMap
 * when there is usually only one thread updating the table, but
 * possibly many retrieving values from it.
 * <p>
 *
 * Implementation note: A slightly faster implementation of
 * this class will be possible once planned Java Memory Model
 * revisions are in place.
 *
 * <p>[<a href="http://gee.cs.oswego.edu/dl/classes/EDU/oswego/cs/dl/util/concurrent/intro.html"> Introduction to this package. </a>]
 **/
public abstract class AbstractConcurrentReadCache extends AbstractMap
        implements Map, Cloneable, Serializable {
    /**
     * The default initial number of table slots for this table (32).
     * Used when not otherwise specified in the constructor.
     **/
    public static final int DEFAULT_INITIAL_CAPACITY = 32;

    /**
     * The minimum capacity.
     * Used if a lower value is implicitly specified
     * by either of the constructors with arguments.
     * MUST be a power of two.
     */
    private static final int MINIMUM_CAPACITY = 4;

    /**
     * The maximum capacity.
     * Used if a higher value is implicitly specified
     * by either of the constructors with arguments.
     * MUST be a power of two <= 1<<30.
     */
    private static final int MAXIMUM_CAPACITY = 1 << 30;

    /**
     * The default load factor for this table.
     * Used when not otherwise specified in the constructor; the default is 0.75f.
     **/
    public static final float DEFAULT_LOAD_FACTOR = 0.75f;

    //OpenSymphony BEGIN (pretty long!)
    protected static final String NULL = "_nul!~";

    private static final Log log = LogFactory
            .getLog(AbstractConcurrentReadCache.class);

    /*
      The basic strategy is an optimistic-style scheme based on
      the guarantee that the hash table and its lists are always
      kept in a consistent enough state to be read without locking:

     * Read operations first proceed without locking, by traversing the
         apparently correct list of the apparently correct bin. If an
         entry is found, but not invalidated (value field null), it is
         returned. If not found, operations must recheck (after a memory
         barrier) to make sure they are using both the right list and
         the right table (which can change under resizes). If
         invalidated, reads must acquire the main update lock to wait out
         the update, and then re-traverse.

     * All list additions are at the front of each bin, making it easy
         to check changes, and also fast to traverse.  Entry next
         pointers are never assigned. Remove() builds new nodes when
         necessary to preserve this.

     * Remove() (also clear()) invalidates removed nodes to alert read
         operations that they must wait out the full modifications.

     */

    /**
     * Lock used only for its memory effects. We use a Boolean
     * because it is serializable, and we create a new one because
     * we need a unique object for each cache instance.
     **/
    protected final Boolean barrierLock = new Boolean(true);

    /**
     * Field written to only to guarantee lock ordering.
     **/
    protected transient Object lastWrite;

    /**
     * The hash table data.
     */
    protected transient Entry[] table;

    /**
     * The total number of mappings in the hash table.
     */
    protected transient int count;

    /**
     * Persistence listener.
     */
    protected transient PersistenceListener persistenceListener = null;

    /**
     * Use memory cache or not.
     */
    protected boolean memoryCaching = true;

    /**
     * Use unlimited disk caching.
     */
    protected boolean unlimitedDiskCache = false;

    /**
     * The load factor for the hash table.
     *
     * @serial
     */
    protected float loadFactor;

    /**
     * Default cache capacity (number of entries).
     */
    protected final int DEFAULT_MAX_ENTRIES = 100;

    /**
     * Maximum number of elements in the cache when it is considered unlimited.
     */
    protected final int UNLIMITED = 2147483646;
    protected transient Collection values = null;

    /**
     * A HashMap containing the group information.
     * Each entry uses the group name as the key, and holds a
     * <code>Set</code> containing the keys of all
     * the cache entries that belong to that particular group.
     */
    protected HashMap groups = new HashMap();
    protected transient Set entrySet = null;

    // Views
    protected transient Set keySet = null;

    /**
     * Cache capacity (number of entries).
     */
    protected int maxEntries = DEFAULT_MAX_ENTRIES;

    /**
     * The table is rehashed when its size exceeds this threshold.
     * (The value of this field is always (int)(capacity * loadFactor).)
     *
     * @serial
     */
    protected int threshold;

    /**
     * Use overflow persistence caching.
     */
    private boolean overflowPersistence = false;

    /**
     * Constructs a new, empty map with the specified initial capacity and the specified load factor.
     *
     * @param initialCapacity the initial capacity
     *  The actual initial capacity is rounded to the nearest power of two.
     * @param loadFactor  the load factor of the AbstractConcurrentReadCache
     * @throws IllegalArgumentException  if the initial maximum number
     *               of elements is less
     *               than zero, or if the load factor is nonpositive.
     */
    public AbstractConcurrentReadCache(int initialCapacity,
            float loadFactor) {
        if (initialCapacity < 0) {
            throw new IllegalArgumentException("Illegal initial capacity: "
                    + initialCapacity);
        }

        if (loadFactor <= 0) {
            throw new IllegalArgumentException("Illegal load factor: "
                    + loadFactor);
        }

        this.loadFactor = loadFactor;

        int cap = p2capacity(initialCapacity);
        table = new Entry[cap];
        threshold = (int) (cap * loadFactor);
    }
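The constructor above delegates the power-of-two rounding to p2capacity, which lies outside this excerpt. As a hedged sketch only: the method below is a hypothetical reconstruction of what such rounding typically looks like, mirroring the MINIMUM_CAPACITY/MAXIMUM_CAPACITY constants declared earlier, not the actual OSCache implementation.

```java
// Hypothetical sketch of p2capacity (the real method is outside this excerpt):
// round a requested capacity up to a power of two, clamped to the range
// [MINIMUM_CAPACITY, MAXIMUM_CAPACITY] defined by the class constants.
public class CapacitySketch {
    static final int MINIMUM_CAPACITY = 4;
    static final int MAXIMUM_CAPACITY = 1 << 30;

    static int p2capacity(int initialCapacity) {
        if (initialCapacity <= MINIMUM_CAPACITY) {
            return MINIMUM_CAPACITY;
        }
        if (initialCapacity >= MAXIMUM_CAPACITY) {
            return MAXIMUM_CAPACITY;
        }
        // Smallest power of two >= initialCapacity.
        int cap = Integer.highestOneBit(initialCapacity);
        return (cap == initialCapacity) ? cap : cap << 1;
    }

    public static void main(String[] args) {
        System.out.println(p2capacity(33)); // 64
        System.out.println(p2capacity(32)); // 32
        System.out.println(p2capacity(0));  // 4 (clamped to the minimum)
    }
}
```

With loadFactor 0.75f, a request for 33 slots would thus yield a 64-slot table and a rehash threshold of 48.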

    /**
     * Constructs a new, empty map with the specified initial capacity and default load factor.
     *
     * @param   initialCapacity   the initial capacity of the
     *                            AbstractConcurrentReadCache.
     * @throws    IllegalArgumentException if the initial maximum number
     *              of elements is less
     *              than zero.
     */
    public AbstractConcurrentReadCache(int initialCapacity) {
        this(initialCapacity, DEFAULT_LOAD_FACTOR);
    }

    /**
     * Constructs a new, empty map with a default initial capacity and load factor.
     */
    public AbstractConcurrentReadCache() {
        this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR);
    }

    /**
     * Constructs a new map with the same mappings as the given map.
     * The map is created with a capacity of twice the number of mappings in
     * the given map or 11 (whichever is greater), and a default load factor.
     */
    public AbstractConcurrentReadCache(Map t) {
        this(Math.max(2 * t.size(), 11), DEFAULT_LOAD_FACTOR);
        putAll(t);
    }

    /**
     * Returns <tt>true</tt> if this map contains no key-value mappings.
     *
     * @return <tt>true</tt> if this map contains no key-value mappings.
     */
    public synchronized boolean isEmpty() {
        return count == 0;
    }

    /**
     * Returns a set of the cache keys that reside in a particular group.
     *
     * @param   groupName The name of the group to retrieve.
     * @return  a set containing all of the keys of cache entries that belong
     * to this group, or <code>null</code> if the group was not found.
     * @exception  NullPointerException if the groupName is <code>null</code>.
     */
    public Set getGroup(String groupName) {
        if (log.isDebugEnabled()) {
            log.debug("getGroup called (group=" + groupName + ")");
        }

        Set groupEntries = null;

        if (memoryCaching && (groups != null)) {
            groupEntries = (Set) getGroupForReading(groupName);
        }

        if (groupEntries == null) {
            // Not in the map, try the persistence layer
            groupEntries = persistRetrieveGroup(groupName);
        }

        return groupEntries;
    }

    /**
     * Set the cache capacity.
     */
    public void setMaxEntries(int newLimit) {
        if (newLimit > 0) {
            maxEntries = newLimit;

            synchronized (this) { // because remove() isn't synchronized

                while (size() > maxEntries) {
                    remove(removeItem(), false, false);
                }
            }
        } else {
            // Capacity must be at least 1
            throw new IllegalArgumentException(
                    "Cache maximum number of entries must be at least 1");
        }
    }
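setMaxEntries shrinks the cache inside a synchronized block by repeatedly evicting whatever removeItem() nominates until the new bound holds. A minimal, self-contained illustration of that shrink-loop pattern, using plain collections and simple FIFO eviction; none of these names are OSCache APIs, and the real eviction policy lives in the subclasses:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch (not OSCache code): shrink a bounded cache to a new
// limit by repeatedly evicting a candidate, as setMaxEntries does with
// removeItem()/remove(). Here the candidate is simply the oldest insertion.
public class ShrinkSketch {
    private final Map<String, String> map = new HashMap<>();
    private final Deque<String> order = new ArrayDeque<>(); // insertion order

    public void put(String key, String value) {
        if (map.put(key, value) == null) {
            order.addLast(key); // track only first insertions
        }
    }

    // Mirrors the pattern above: validate the limit, then evict in a loop
    // under the lock until size() fits the new bound.
    public synchronized void setMaxEntries(int newLimit) {
        if (newLimit <= 0) {
            throw new IllegalArgumentException("limit must be at least 1");
        }
        while (map.size() > newLimit) {
            map.remove(order.removeFirst()); // removeItem() analogue
        }
    }

    public int size() {
        return map.size();
    }
}
```

Note the same design point as the original: put itself is not synchronized here, so the shrink loop takes the lock to keep eviction atomic with respect to other callers of setMaxEntries.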
0406:
0407:            /**
0408:             * Retrieve the cache capacity (number of entries).
0409:             */
0410:            public int getMaxEntries() {
0411:                return maxEntries;
0412:            }
0413:
0414:            /**
0415:             * Sets the memory caching flag.
0416:             */
0417:            public void setMemoryCaching(boolean memoryCaching) {
0418:                this .memoryCaching = memoryCaching;
0419:            }
0420:
0421:            /**
0422:             * Check if memory caching is used.
0423:             */
0424:            public boolean isMemoryCaching() {
0425:                return memoryCaching;
0426:            }
0427:
0428:            /**
0429:             * Set the persistence listener to use.
0430:             */
0431:            public void setPersistenceListener(PersistenceListener listener) {
0432:                this .persistenceListener = listener;
0433:            }
0434:
0435:            /**
0436:             * Get the persistence listener.
0437:             */
0438:            public PersistenceListener getPersistenceListener() {
0439:                return persistenceListener;
0440:            }
0441:
0442:            /**
0443:             * Sets the unlimited disk caching flag.
0444:             */
0445:            public void setUnlimitedDiskCache(boolean unlimitedDiskCache) {
0446:                this .unlimitedDiskCache = unlimitedDiskCache;
0447:            }
0448:
0449:            /**
0450:             * Check if we use unlimited disk cache.
0451:             */
0452:            public boolean isUnlimitedDiskCache() {
0453:                return unlimitedDiskCache;
0454:            }
0455:
0456:            /**
0457:             * Check if we use overflowPersistence
0458:             *
0459:             * @return Returns the overflowPersistence.
0460:             */
0461:            public boolean isOverflowPersistence() {
0462:                return this .overflowPersistence;
0463:            }
0464:
0465:            /**
0466:             * Sets the overflowPersistence flag
0467:             *
0468:             * @param overflowPersistence The overflowPersistence to set.
0469:             */
0470:            public void setOverflowPersistence(boolean overflowPersistence) {
0471:                this .overflowPersistence = overflowPersistence;
0472:            }
0473:
0474:            /**
0475:             * Return the number of slots in this table.
0476:             **/
0477:            public synchronized int capacity() {
0478:                return table.length;
0479:            }
0480:
0481:            /**
0482:             * Removes all mappings from this map.
0483:             */
0484:            public synchronized void clear() {
0485:                Entry[] tab = table;
0486:
0487:                for (int i = 0; i < tab.length; ++i) {
0488:                    // must invalidate all to force concurrent get's to wait and then retry
0489:                    for (Entry e = tab[i]; e != null; e = e.next) {
0490:                        e.value = null;
0491:
0492:                        /** OpenSymphony BEGIN */
0493:                        itemRemoved(e.key);
0494:
0495:                        /** OpenSymphony END */
0496:                    }
0497:
0498:                    tab[i] = null;
0499:                }
0500:
0501:                // Clean out the entire disk cache
0502:                persistClear();
0503:
0504:                count = 0;
0505:                recordModification(tab);
0506:            }
0507:
0508:            /**
0509:             * Returns a shallow copy of this.
0510:             * <tt>AbstractConcurrentReadCache</tt> instance: the keys and
0511:             * values themselves are not cloned.
0512:             *
0513:             * @return a shallow copy of this map.
0514:             */
0515:            public synchronized Object clone() {
0516:                try {
0517:                    AbstractConcurrentReadCache t = (AbstractConcurrentReadCache) super 
0518:                            .clone();
0519:                    t.keySet = null;
0520:                    t.entrySet = null;
0521:                    t.values = null;
0522:
0523:                    Entry[] tab = table;
0524:                    t.table = new Entry[tab.length];
0525:
0526:                    Entry[] ttab = t.table;
0527:
0528:                    for (int i = 0; i < tab.length; ++i) {
0529:                        Entry first = tab[i];
0530:
0531:                        if (first != null) {
0532:                            ttab[i] = (Entry) (first.clone());
0533:                        }
0534:                    }
0535:
0536:                    return t;
0537:                } catch (CloneNotSupportedException e) {
0538:                    // this shouldn't happen, since we are Cloneable
0539:                    throw new InternalError();
0540:                }
0541:            }
0542:
0543:            /**
0544:             * Tests if some key maps into the specified value in this table.
0545:             * This operation is more expensive than the <code>containsKey</code>
0546:             * method.<p>
0547:             *
0548:             * Note that this method is identical in functionality to containsValue,
0549:             * (which is part of the Map interface in the collections framework).
0550:             *
0551:             * @param      value   a value to search for.
0552:             * @return     <code>true</code> if and only if some key maps to the
0553:             *             <code>value</code> argument in this table as
0554:             *             determined by the <tt>equals</tt> method;
0555:             *             <code>false</code> otherwise.
0556:             * @exception  NullPointerException  if the value is <code>null</code>.
0557:             * @see        #containsKey(Object)
0558:             * @see        #containsValue(Object)
0559:             * @see           Map
0560:             */
0561:            public boolean contains(Object value) {
0562:                return containsValue(value);
0563:            }
0564:
0565:            /**
0566:             * Tests if the specified object is a key in this table.
0567:             *
0568:             * @param   key   possible key.
0569:             * @return  <code>true</code> if and only if the specified object
0570:             *          is a key in this table, as determined by the
0571:             *          <tt>equals</tt> method; <code>false</code> otherwise.
0572:             * @exception  NullPointerException  if the key is
0573:             *               <code>null</code>.
0574:             * @see     #contains(Object)
0575:             */
0576:            public boolean containsKey(Object key) {
0577:                return get(key) != null;
0578:
0579:                /** OpenSymphony BEGIN */
0580:
0581:                // TODO: Also check the persistence?
0582:                /** OpenSymphony END */
0583:            }
0584:
0585:            /**
0586:             * Returns <tt>true</tt> if this map maps one or more keys to the
0587:             * specified value. Note: This method requires a full internal
0588:             * traversal of the hash table, and so is much slower than
0589:             * method <tt>containsKey</tt>.
0590:             *
0591:             * @param value value whose presence in this map is to be tested.
0592:             * @return <tt>true</tt> if this map maps one or more keys to the
0593:             * specified value.
0594:             * @exception  NullPointerException  if the value is <code>null</code>.
0595:             */
0596:            public boolean containsValue(Object value) {
0597:                if (value == null) {
0598:                    throw new NullPointerException();
0599:                }
0600:
0601:                Entry[] tab = getTableForReading();
0602:
0603:                for (int i = 0; i < tab.length; ++i) {
0604:                    for (Entry e = tab[i]; e != null; e = e.next) {
0605:                        Object v = e.value;
0606:
0607:                        if ((v != null) && value.equals(v)) {
0608:                            return true;
0609:                        }
0610:                    }
0611:                }
0612:
0613:                return false;
0614:            }
0615:
0616:            /**
0617:             * Returns an enumeration of the values in this table.
0618:             * Use the Enumeration methods on the returned object to fetch the elements
0619:             * sequentially.
0620:             *
0621:             * @return  an enumeration of the values in this table.
0622:             * @see     java.util.Enumeration
0623:             * @see     #keys()
0624:             * @see        #values()
0625:             * @see        Map
0626:             */
0627:            public Enumeration elements() {
0628:                return new ValueIterator();
0629:            }
0630:
0631:            /**
0632:             * Returns a collection view of the mappings contained in this map.
0633:             * Each element in the returned collection is a <tt>Map.Entry</tt>.  The
0634:             * collection is backed by the map, so changes to the map are reflected in
0635:             * the collection, and vice-versa.  The collection supports element
0636:             * removal, which removes the corresponding mapping from the map, via the
0637:             * <tt>Iterator.remove</tt>, <tt>Collection.remove</tt>,
0638:             * <tt>removeAll</tt>, <tt>retainAll</tt>, and <tt>clear</tt> operations.
0639:             * It does not support the <tt>add</tt> or <tt>addAll</tt> operations.
0640:             *
0641:             * @return a collection view of the mappings contained in this map.
0642:             */
0643:            public Set entrySet() {
0644:                Set es = entrySet;
0645:
0646:                if (es != null) {
0647:                    return es;
0648:                } else {
0649:                    return entrySet = new AbstractSet() {
0650:                        public Iterator iterator() {
0651:                            return new HashIterator();
0652:                        }
0653:
0654:                        public boolean contains(Object o) {
0655:                            if (!(o instanceof  Map.Entry)) {
0656:                                return false;
0657:                            }
0658:
0659:                            Map.Entry entry = (Map.Entry) o;
0660:                            Object key = entry.getKey();
0661:                            Object v = AbstractConcurrentReadCache.this 
0662:                                    .get(key);
0663:
0664:                            return (v != null) && v.equals(entry.getValue());
0665:                        }
0666:
0667:                        public boolean remove(Object o) {
0668:                            if (!(o instanceof  Map.Entry)) {
0669:                                return false;
0670:                            }
0671:
0672:                            return AbstractConcurrentReadCache.this 
0673:                                    .findAndRemoveEntry((Map.Entry) o);
0674:                        }
0675:
0676:                        public int size() {
0677:                            return AbstractConcurrentReadCache.this.size();
0678:                        }
0679:
0680:                        public void clear() {
0681:                            AbstractConcurrentReadCache.this.clear();
0682:                        }
0683:                    };
0684:                }
0685:            }
0686:
0687:            /**
0688:             * Returns the value to which the specified key is mapped in this table.
0689:             *
0690:             * @param   key   a key in the table.
0691:             * @return  the value to which the key is mapped in this table;
0692:             *          <code>null</code> if the key is not mapped to any value in
0693:             *          this table.
0694:             * @exception  NullPointerException  if the key is
0695:             *               <code>null</code>.
0696:             * @see     #put(Object, Object)
0697:             */
0698:            public Object get(Object key) {
0699:                if (log.isDebugEnabled()) {
0700:                    log.debug("get called (key=" + key + ")");
0701:                }
0702:
0703:                // hash(key) throws a NullPointerException if the key is null
0704:                int hash = hash(key);
0705:
0706:                /*
0707:                   Start off at the apparently correct bin.  If the entry is found, we
0708:                   need to check after a barrier anyway.  If not found, we need a
0709:                   barrier to check whether we are actually in the right bin.  Either
0710:                   way, we encounter only one barrier unless we need to retry.
0711:                   And we only need to fully synchronize if there have been
0712:                   concurrent modifications.
0713:                 */
0714:                Entry[] tab = table;
0715:                int index = hash & (tab.length - 1);
0716:                Entry first = tab[index];
0717:                Entry e = first;
0718:
0719:                for (;;) {
0720:                    if (e == null) {
0721:                        // If key apparently not there, check to
0722:                        // make sure this was a valid read
0723:                        tab = getTableForReading();
0724:
0725:                        if (first == tab[index]) {
0726:                            /** OpenSymphony BEGIN */
0727:
0728:                            /* Previous code
0729:                            return null;*/
0730:
0731:                            // Not in the table, try persistence
0732:                            Object value = persistRetrieve(key);
0733:
0734:                            if (value != null) {
0735:                                // Update the map, but don't persist the data
0736:                                put(key, value, false);
0737:                            }
0738:
0739:                            return value;
0740:
0741:                            /** OpenSymphony END */
0742:                        } else {
0743:                            // Wrong list -- must restart traversal at new first
0744:                            e = first = tab[index = hash & (tab.length - 1)];
0745:                        }
0746:                    }
0747:                    // checking for pointer equality first wins in most applications
0748:                    else if ((key == e.key)
0749:                            || ((e.hash == hash) && key.equals(e.key))) {
0750:                        Object value = e.value;
0751:
0752:                        if (value != null) {
0753:                            /** OpenSymphony BEGIN */
0754:
0755:                            /* Previous code
0756:                            return value;*/
0757:                            if (NULL.equals(value)) {
0758:                                // Memory caching is disabled; fall back to disk
0759:                                value = persistRetrieve(e.key);
0760:
0761:                                if (value != null) {
0762:                                    itemRetrieved(key);
0763:                                }
0764:
0765:                                return value; // fix [CACHE-13]
0766:                            } else {
0767:                                itemRetrieved(key);
0768:
0769:                                return value;
0770:                            }
0771:
0772:                            /** OpenSymphony END */
0773:                        }
0774:
0775:                        // Entry was invalidated during deletion. But it could
0776:                        // have been re-inserted, so we must retraverse.
0777:                        // To avoid useless contention, get lock to wait out modifications
0778:                        // before retraversing.
0779:                        synchronized (this) {
0780:                            tab = table;
0781:                        }
0782:
0783:                        e = first = tab[index = hash & (tab.length - 1)];
0784:                    } else {
0785:                        e = e.next;
0786:                    }
0787:                }
0788:            }
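When `memoryCaching` is off, the map stores a shared `NULL` marker instead of the real value, and `get()` above falls back to the persistent store when it encounters that marker. The following is a minimal sketch of this sentinel pattern; the class, the `resolve` helper, and the `disk` map are illustrative inventions, not part of OSCache:

```java
import java.util.HashMap;
import java.util.Map;

public class NullSentinelSketch {
    // Shared marker meaning "entry exists, but the value lives on disk only".
    static final Object NULL = new Object();

    // Resolve a stored slot: a real value is returned directly; the
    // sentinel triggers a lookup in the (hypothetical) disk store.
    static Object resolve(Object stored, Map<Object, Object> disk, Object key) {
        if (NULL.equals(stored)) {
            return disk.get(key);
        }
        return stored;
    }

    public static void main(String[] args) {
        Map<Object, Object> disk = new HashMap<>();
        disk.put("k", "on-disk-value");

        System.out.println(resolve("in-memory-value", disk, "k")); // in-memory-value
        System.out.println(resolve(NULL, disk, "k"));              // on-disk-value
    }
}
```

Using `NULL.equals(value)` rather than `value == null` lets the map distinguish "evicted to disk" from "entry invalidated during deletion", which the retry loop above depends on.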
0789:
0790:            /**
0791:             * Returns a set view of the keys contained in this map.
0792:             * The set is backed by the map, so changes to the map are reflected in the set, and
0793:             * vice-versa.  The set supports element removal, which removes the
0794:             * corresponding mapping from this map, via the <tt>Iterator.remove</tt>,
0795:             * <tt>Set.remove</tt>, <tt>removeAll</tt>, <tt>retainAll</tt>, and
0796:             * <tt>clear</tt> operations.  It does not support the <tt>add</tt> or
0797:             * <tt>addAll</tt> operations.
0798:             *
0799:             * @return a set view of the keys contained in this map.
0800:             */
0801:            public Set keySet() {
0802:                Set ks = keySet;
0803:
0804:                if (ks != null) {
0805:                    return ks;
0806:                } else {
0807:                    return keySet = new AbstractSet() {
0808:                        public Iterator iterator() {
0809:                            return new KeyIterator();
0810:                        }
0811:
0812:                        public int size() {
0813:                            return AbstractConcurrentReadCache.this.size();
0814:                        }
0815:
0816:                        public boolean contains(Object o) {
0817:                            return AbstractConcurrentReadCache.this
0818:                                    .containsKey(o);
0819:                        }
0820:
0821:                        public boolean remove(Object o) {
0822:                            return AbstractConcurrentReadCache.this.remove(o) != null;
0823:                        }
0824:
0825:                        public void clear() {
0826:                            AbstractConcurrentReadCache.this.clear();
0827:                        }
0828:                    };
0829:                }
0830:            }
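`keySet()`, `values()`, and `entrySet()` all follow the same lazy-view pattern: build the view on first request, cache it in a field, and hand the same instance back afterwards. A condensed sketch of that shape, with hypothetical names (`LazyViewSketch` is not an OSCache class):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class LazyViewSketch {
    private final Map<String, String> map = new HashMap<>();
    private Set<String> keySet; // view is built once, on first request

    // Same shape as keySet() above: return the cached view if present,
    // otherwise create it, cache it, and return it.
    public Set<String> keySet() {
        Set<String> ks = keySet;
        return (ks != null) ? ks : (keySet = Collections.unmodifiableSet(map.keySet()));
    }

    public static void main(String[] args) {
        LazyViewSketch cache = new LazyViewSketch();
        // Repeated calls hand back the identical view instance.
        System.out.println(cache.keySet() == cache.keySet()); // true
    }
}
```

The unsynchronized field check is a benign race: if two threads initialize the view concurrently, each gets an equivalent view and at worst one throwaway object is created.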
0831:
0832:            /**
0833:             * Returns an enumeration of the keys in this table.
0834:             *
0835:             * @return  an enumeration of the keys in this table.
0836:             * @see     Enumeration
0837:             * @see     #elements()
0838:             * @see        #keySet()
0839:             * @see        Map
0840:             */
0841:            public Enumeration keys() {
0842:                return new KeyIterator();
0843:            }
0844:
0845:            /**
0846:             * Returns the load factor.
0847:             **/
0848:            public float loadFactor() {
0849:                return loadFactor;
0850:            }
0851:
0852:            /**
0853:             * Maps the specified <code>key</code> to the specified <code>value</code> in this table.
0854:             * Neither the key nor the
0855:             * value can be <code>null</code>. <p>
0856:             *
0857:             * The value can be retrieved by calling the <code>get</code> method
0858:             * with a key that is equal to the original key.
0859:             *
0860:             * @param      key     the table key.
0861:             * @param      value   the value.
0862:             * @return     the previous value of the specified key in this table,
0863:             *             or <code>null</code> if it did not have one.
0864:             * @exception  NullPointerException  if the key or value is
0865:             *               <code>null</code>.
0866:             * @see     Object#equals(Object)
0867:             * @see     #get(Object)
0868:             */
0869:            /** OpenSymphony BEGIN */
0870:            public Object put(Object key, Object value) {
0871:                // Call the internal put using persistence
0872:                return put(key, value, true);
0873:            }
0874:
0875:            /**
0876:             * Copies all of the mappings from the specified map to this one.
0877:             *
0878:             * These mappings replace any mappings that this map had for any of the
0879:             * keys currently in the specified Map.
0880:             *
0881:             * @param t Mappings to be stored in this map.
0882:             */
0883:            public synchronized void putAll(Map t) {
0884:                for (Iterator it = t.entrySet().iterator(); it.hasNext();) {
0885:                    Map.Entry entry = (Map.Entry) it.next();
0886:                    Object key = entry.getKey();
0887:                    Object value = entry.getValue();
0888:                    put(key, value);
0889:                }
0890:            }
0891:
0892:            /**
0893:             * Removes the key (and its corresponding value) from this table.
0894:             * This method does nothing if the key is not in the table.
0895:             *
0896:             * @param   key   the key that needs to be removed.
0897:             * @return  the value to which the key had been mapped in this table,
0898:             *          or <code>null</code> if the key did not have a mapping.
0899:             */
0900:            /** OpenSymphony BEGIN */
0901:            public Object remove(Object key) {
0902:                return remove(key, true, false);
0903:            }
0904:
0905:            /**
0906:             * Like <code>remove(Object)</code>, but ensures that the entry will be removed from the persistent store, too,
0907:             * even if overflowPersistence or unlimitedDiskCache is true.
0908:             *
0909:             * @param   key   the key that needs to be removed.
0910:             * @return  the value to which the key had been mapped in this table,
0911:             *          or <code>null</code> if the key did not have a mapping.
0912:             */
0913:            public Object removeForce(Object key) {
0914:                return remove(key, true, true);
0915:            }
0916:
0917:            /**
0918:             * Returns the total number of cache entries held in this map.
0919:             *
0920:             * @return the number of key-value mappings in this map.
0921:             */
0922:            public synchronized int size() {
0923:                return count;
0924:            }
0925:
0926:            /**
0927:             * Returns a collection view of the values contained in this map.
0928:             * The collection is backed by the map, so changes to the map are reflected in
0929:             * the collection, and vice-versa.  The collection supports element
0930:             * removal, which removes the corresponding mapping from this map, via the
0931:             * <tt>Iterator.remove</tt>, <tt>Collection.remove</tt>,
0932:             * <tt>removeAll</tt>, <tt>retainAll</tt>, and <tt>clear</tt> operations.
0933:             * It does not support the <tt>add</tt> or <tt>addAll</tt> operations.
0934:             *
0935:             * @return a collection view of the values contained in this map.
0936:             */
0937:            public Collection values() {
0938:                Collection vs = values;
0939:
0940:                if (vs != null) {
0941:                    return vs;
0942:                } else {
0943:                    return values = new AbstractCollection() {
0944:                        public Iterator iterator() {
0945:                            return new ValueIterator();
0946:                        }
0947:
0948:                        public int size() {
0949:                            return AbstractConcurrentReadCache.this.size();
0950:                        }
0951:
0952:                        public boolean contains(Object o) {
0953:                            return AbstractConcurrentReadCache.this
0954:                                    .containsValue(o);
0955:                        }
0956:
0957:                        public void clear() {
0958:                            AbstractConcurrentReadCache.this.clear();
0959:                        }
0960:                    };
0961:                }
0962:            }
0963:
0964:            /**
0965:             * Get ref to group.
0966:             * CACHE-127 Synchronized copying of the group entry set since
0967:             * the new HashSet(Collection c) constructor uses the iterator.
0968:             * This may slow things down but it is better than a
0969:             * ConcurrentModificationException.  We might have to revisit the
0970:             * code if performance is too adversely impacted.
0971:             **/
0972:            protected final synchronized Set getGroupForReading(String groupName) {
0973:                Set group = (Set) getGroupsForReading().get(groupName);
0974:                if (group == null)
0975:                    return null;
0976:                return new HashSet(group);
0977:            }
0978:
0979:            /**
0980:             * Get ref to groups.
0981:             * The reference and the cells it
0982:             * accesses will be at least as fresh as from last
0983:             * use of barrierLock
0984:             **/
0985:            protected final Map getGroupsForReading() {
0986:                synchronized (barrierLock) {
0987:                    return groups;
0988:                }
0989:            }
0990:
0991:            /**
0992:             * Get ref to table; the reference and the cells it
0993:             * accesses will be at least as fresh as from last
0994:             * use of barrierLock
0995:             **/
0996:            protected final Entry[] getTableForReading() {
0997:                synchronized (barrierLock) {
0998:                    return table;
0999:                }
1000:            }
1001:
1002:            /**
1003:             * Force a memory synchronization that will cause
1004:             * all readers to see table. Call only when already
1005:             * holding main synch lock.
1006:             **/
1007:            protected final void recordModification(Object x) {
1008:                synchronized (barrierLock) {
1009:                    lastWrite = x;
1010:                }
1011:            }
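`getTableForReading()` and `recordModification()` together implement a small memory-barrier idiom: writers publish the new table while holding `barrierLock`, and readers enter the same lock just long enough to force visibility of that write, without ever taking the main synchronization lock. A condensed, self-contained sketch of the idiom (class and method names are illustrative only):

```java
public class BarrierLockSketch {
    private final Object barrierLock = new Object();
    private Object[] table = new Object[8];

    // Writer side: publish a new table; the inner synchronized block is
    // the release half of the barrier (cf. recordModification above).
    public synchronized void publish(Object[] newTable) {
        synchronized (barrierLock) {
            table = newTable;
        }
    }

    // Reader side: this tiny critical section is the acquire half
    // (cf. getTableForReading) -- no main lock is needed to read.
    public Object[] tableForReading() {
        synchronized (barrierLock) {
            return table;
        }
    }

    public static void main(String[] args) {
        BarrierLockSketch s = new BarrierLockSketch();
        s.publish(new Object[32]);
        System.out.println(s.tableForReading().length); // 32
    }
}
```

This predates `java.util.concurrent`; under the modern memory model a `volatile` field would achieve the same visibility guarantee, but paired lock/unlock on a common monitor establishes the same happens-before edge.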
1012:
1013:            /**
1014:             * Helper method for entrySet remove.
1015:             **/
1016:            protected synchronized boolean findAndRemoveEntry(Map.Entry entry) {
1017:                Object key = entry.getKey();
1018:                Object v = get(key);
1019:
1020:                if ((v != null) && v.equals(entry.getValue())) {
1021:                    remove(key);
1022:
1023:                    return true;
1024:                } else {
1025:                    return false;
1026:                }
1027:            }
1028:
1029:            /**
1030:             * Remove an object from the persistence.
1031:             * @param key The key of the object to remove
1032:             */
1033:            protected void persistRemove(Object key) {
1034:                if (log.isDebugEnabled()) {
1035:                    log.debug("PersistRemove called (key=" + key + ")");
1036:                }
1037:
1038:                if (persistenceListener != null) {
1039:                    try {
1040:                        persistenceListener.remove((String) key);
1041:                    } catch (CachePersistenceException e) {
1042:                        log.error(
1043:                                "[oscache] Exception removing cache entry with key '"
1044:                                        + key + "' from persistence", e);
1045:                    }
1046:                }
1047:            }
1048:
1049:            /**
1050:             * Removes a cache group using the persistence listener.
1051:             * @param groupName The name of the group to remove
1052:             */
1053:            protected void persistRemoveGroup(String groupName) {
1054:                if (log.isDebugEnabled()) {
1055:                    log.debug("persistRemoveGroup called (groupName="
1056:                            + groupName + ")");
1057:                }
1058:
1059:                if (persistenceListener != null) {
1060:                    try {
1061:                        persistenceListener.removeGroup(groupName);
1062:                    } catch (CachePersistenceException e) {
1063:                        log.error("[oscache] Exception removing group "
1064:                                + groupName, e);
1065:                    }
1066:                }
1067:            }
1068:
1069:            /**
1070:             * Retrieve an object from the persistence listener.
1071:             * @param key The key of the object to retrieve
1072:             */
1073:            protected Object persistRetrieve(Object key) {
1074:                if (log.isDebugEnabled()) {
1075:                    log.debug("persistRetrieve called (key=" + key + ")");
1076:                }
1077:
1078:                Object entry = null;
1079:
1080:                if (persistenceListener != null) {
1081:                    try {
1082:                        entry = persistenceListener.retrieve((String) key);
1083:                    } catch (CachePersistenceException e) {
1084:                        /**
1085:                         * It is normal that we get an exception occasionally.
1086:                         * It happens when the item is invalidated (written or removed)
1087:                         * during read. The logic is constructed so that read is retried.
1088:                         */
1089:                    }
1090:                }
1091:
1092:                return entry;
1093:            }
1094:
1095:            /**
1096:             * Retrieves a cache group using the persistence listener.
1097:             * @param groupName The name of the group to retrieve
1098:             */
1099:            protected Set persistRetrieveGroup(String groupName) {
1100:                if (log.isDebugEnabled()) {
1101:                    log.debug("persistRetrieveGroup called (groupName="
1102:                            + groupName + ")");
1103:                }
1104:
1105:                if (persistenceListener != null) {
1106:                    try {
1107:                        return persistenceListener.retrieveGroup(groupName);
1108:                    } catch (CachePersistenceException e) {
1109:                        log.error("[oscache] Exception retrieving group "
1110:                                + groupName, e);
1111:                    }
1112:                }
1113:
1114:                return null;
1115:            }
1116:
1117:            /**
1118:             * Store an object in the cache using the persistence listener.
1119:             * @param key The object key
1120:             * @param obj The object to store
1121:             */
1122:            protected void persistStore(Object key, Object obj) {
1123:                if (log.isDebugEnabled()) {
1124:                    log.debug("persistStore called (key=" + key + ")");
1125:                }
1126:
1127:                if (persistenceListener != null) {
1128:                    try {
1129:                        persistenceListener.store((String) key, obj);
1130:                    } catch (CachePersistenceException e) {
1131:                        log.error("[oscache] Exception persisting " + key, e);
1132:                    }
1133:                }
1134:            }
1135:
1136:            /**
1137:             * Creates or updates a cache group using the persistence listener.
1138:             * @param groupName The name of the group to update
1139:             * @param group The entries for the group
1140:             */
1141:            protected void persistStoreGroup(String groupName, Set group) {
1142:                if (log.isDebugEnabled()) {
1143:                    log.debug("persistStoreGroup called (groupName="
1144:                            + groupName + ")");
1145:                }
1146:
1147:                if (persistenceListener != null) {
1148:                    try {
1149:                        if ((group == null) || group.isEmpty()) {
1150:                            persistenceListener.removeGroup(groupName);
1151:                        } else {
1152:                            persistenceListener.storeGroup(groupName, group);
1153:                        }
1154:                    } catch (CachePersistenceException e) {
1155:                        log.error("[oscache] Exception persisting group "
1156:                                + groupName, e);
1157:                    }
1158:                }
1159:            }
1160:
1161:            /**
1162:             * Removes the entire cache from persistent storage.
1163:             */
1164:            protected void persistClear() {
1165:                if (log.isDebugEnabled()) {
1166:                    log.debug("persistClear called");
1167:                    ;
1168:                }
1169:
1170:                if (persistenceListener != null) {
1171:                    try {
1172:                        persistenceListener.clear();
1173:                    } catch (CachePersistenceException e) {
1174:                        log.error(
1175:                                "[oscache] Exception clearing persistent cache",
1176:                                e);
1178:                    }
1179:                }
1180:            }
1181:
1182:            /**
1183:             * Notify the underlying implementation that an item was put in the cache.
1184:             *
1185:             * @param key The cache key of the item that was put.
1186:             */
1187:            protected abstract void itemPut(Object key);
1188:
1189:            /**
1190:             * Notify any underlying algorithm that an item has been retrieved from the cache.
1191:             *
1192:             * @param key The cache key of the item that was retrieved.
1193:             */
1194:            protected abstract void itemRetrieved(Object key);
1195:
1196:            /**
1197:             * Notify the underlying implementation that an item was removed from the cache.
1198:             *
1199:             * @param key The cache key of the item that was removed.
1200:             */
1201:            protected abstract void itemRemoved(Object key);
1202:
1203:            /**
1204:             * The cache has reached its capacity and an item needs to be removed
1205:             * (typically according to an algorithm such as LRU or FIFO).
1206:             *
1207:             * @return The key of whichever item was removed.
1208:             */
1209:            protected abstract Object removeItem();
1210:
1211:            /**
1212:             * Reconstitute the <tt>AbstractConcurrentReadCache</tt>
1213:             * instance from a stream
1214:             * (i.e., deserialize it).
1215:             */
1216:            private synchronized void readObject(java.io.ObjectInputStream s)
1217:                    throws IOException, ClassNotFoundException {
1218:                // Read in the threshold, loadfactor, and any hidden stuff
1219:                s.defaultReadObject();
1220:
1221:                // Read in number of buckets and allocate the bucket array;
1222:                int numBuckets = s.readInt();
1223:                table = new Entry[numBuckets];
1224:
1225:                // Read in size (number of Mappings)
1226:                int size = s.readInt();
1227:
1228:                // Read the keys and values, and put the mappings in the table
1229:                for (int i = 0; i < size; i++) {
1230:                    Object key = s.readObject();
1231:                    Object value = s.readObject();
1232:                    put(key, value);
1233:                }
1234:            }
1235:
1236:            /**
1237:             * Rehashes the contents of this map into a new table with a larger capacity.
1238:             * This method is called automatically when the
1239:             * number of keys in this map exceeds its capacity and load factor.
1240:             */
1241:            protected void rehash() {
1242:                Entry[] oldMap = table;
1243:                int oldCapacity = oldMap.length;
1244:
1245:                if (oldCapacity >= MAXIMUM_CAPACITY) {
1246:                    return;
1247:                }
1248:
1249:                int newCapacity = oldCapacity << 1;
1250:                Entry[] newMap = new Entry[newCapacity];
1251:                threshold = (int) (newCapacity * loadFactor);
1252:
1253:                /*
1254:                  We need to guarantee that any existing reads of oldMap can
1255:                  proceed. So we cannot yet null out each oldMap bin.
1256:
1257:                  Because we are using power-of-two expansion, the elements
1258:                  from each bin must either stay at same index, or move
1259:                  to oldCapacity+index. We also minimize new node creation by
1260:                  catching cases where old nodes can be reused because their
1261:                  .next fields won't change. (This is checked only for sequences
1262:                  of one and two. It is not worth checking longer ones.)
1263:                 */
1264:                for (int i = 0; i < oldCapacity; ++i) {
1265:                    Entry l = null;
1266:                    Entry h = null;
1267:                    Entry e = oldMap[i];
1268:
1269:                    while (e != null) {
1270:                        int hash = e.hash;
1271:                        Entry next = e.next;
1272:
1273:                        if ((hash & oldCapacity) == 0) {
1274:                            // stays at newMap[i]
1275:                            if (l == null) {
1276:                                // try to reuse node
1277:                                if ((next == null)
1278:                                        || ((next.next == null) && ((next.hash & oldCapacity) == 0))) {
1279:                                    l = e;
1280:
1281:                                    break;
1282:                                }
1283:                            }
1284:
1285:                            l = new Entry(hash, e.key, e.value, l);
1286:                        } else {
1287:                            // moves to newMap[oldCapacity+i]
1288:                            if (h == null) {
1289:                                if ((next == null)
1290:                                        || ((next.next == null) && ((next.hash & oldCapacity) != 0))) {
1291:                                    h = e;
1292:
1293:                                    break;
1294:                                }
1295:                            }
1296:
1297:                            h = new Entry(hash, e.key, e.value, h);
1298:                        }
1299:
1300:                        e = next;
1301:                    }
1302:
1303:                    newMap[i] = l;
1304:                    newMap[oldCapacity + i] = h;
1305:                }
1306:
1307:                table = newMap;
1308:                recordModification(newMap);
1309:            }
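The rehash above relies on a property of power-of-two doubling: since the index is `hash & (capacity - 1)`, doubling the capacity adds exactly one bit to the mask, so an entry in bin `i` either stays at `i` (when `(hash & oldCapacity) == 0`) or moves to `oldCapacity + i`. That is what lets the loop split each old bin into just the `l` and `h` chains. A small stand-alone demonstration (the class is hypothetical, written only to illustrate the invariant):

```java
public class RehashIndexDemo {
    // Power-of-two tables index with a mask: index = hash & (capacity - 1).
    static int index(int hash, int capacity) {
        return hash & (capacity - 1);
    }

    public static void main(String[] args) {
        int oldCap = 16;
        int newCap = oldCap << 1;

        for (int hash : new int[] { 3, 19, 100, -7 }) {
            int oldIdx = index(hash, oldCap);
            int newIdx = index(hash, newCap);

            if ((hash & oldCap) == 0) {
                // The extra mask bit is 0: the entry stays in place.
                System.out.println(hash + " stays: " + (newIdx == oldIdx));          // true
            } else {
                // The extra mask bit is 1: it moves up by exactly oldCapacity.
                System.out.println(hash + " moves: " + (newIdx == oldIdx + oldCap)); // true
            }
        }
    }
}
```

The same bit test also explains the node-reuse check in `rehash()`: a trailing run of entries whose `(hash & oldCapacity)` bits agree can keep its existing `.next` links.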
1310:
1311:            /**
1312:             * Continuation of put(), called only when synch lock is
1313:             * held and interference has been detected.
1314:             **/
1315:            /** OpenSymphony BEGIN */
1316:
1317:            /* Previous code
1318:            protected Object sput(Object key, Object value, int hash) {*/
1319:            protected Object sput(Object key, Object value, int hash,
1320:                    boolean persist) {
1321:                /** OpenSymphony END */
1322:                Entry[] tab = table;
1323:                int index = hash & (tab.length - 1);
1324:                Entry first = tab[index];
1325:                Entry e = first;
1326:
1327:                for (;;) {
1328:                    if (e == null) {
1329:                        /** OpenSymphony BEGIN */
1330:
1331:                        // Previous code
1332:                        //  		Entry newEntry = new Entry(hash, key, value, first);
1333:                        Entry newEntry;
1334:
1335:                        if (memoryCaching) {
1336:                            newEntry = new Entry(hash, key, value, first);
1337:                        } else {
1338:                            newEntry = new Entry(hash, key, NULL, first);
1339:                        }
1340:
1341:                        itemPut(key);
1342:
1343:                        // Persist if required
1344:                        if (persist && !overflowPersistence) {
1345:                            persistStore(key, value);
1346:                        }
1347:
1348:                        // If we have a CacheEntry, update the group lookups
1349:                        if (value instanceof  CacheEntry) {
1350:                            updateGroups(null, (CacheEntry) value, persist);
1351:                        }
1352:
1353:                        /**        OpenSymphony END */
1354:                        tab[index] = newEntry;
1355:
1356:                        if (++count >= threshold) {
1357:                            rehash();
1358:                        } else {
1359:                            recordModification(newEntry);
1360:                        }
1361:
1362:                        return null;
1363:                    } else if ((key == e.key)
1364:                            || ((e.hash == hash) && key.equals(e.key))) {
1365:                        Object oldValue = e.value;
1366:
1367:                        /** OpenSymphony BEGIN */
1368:
1369:                        /* Previous code
1370:                        e.value = value; */
1371:                        if (memoryCaching) {
1372:                            e.value = value;
1373:                        }
1374:
1375:                        // Persist if required
1376:                        if (persist && overflowPersistence) {
1377:                            persistRemove(key);
1378:                        } else if (persist) {
1379:                            persistStore(key, value);
1380:                        }
1381:
1382:                        updateGroups(oldValue, value, persist);
1383:
1384:                        itemPut(key);
1385:
1386:                        /** OpenSymphony END */
1387:                        return oldValue;
1388:                    } else {
1389:                        e = e.next;
1390:                    }
1391:                }
1392:            }
1393:
1394:            /**
1395:             * Continuation of remove(), called only when synch lock is
1396:             * held and interference has been detected.
1397:             **/
1398:            /** OpenSymphony BEGIN */
1399:
1400:            /* Previous code
1401:            protected Object sremove(Object key, int hash) { */
1402:            protected Object sremove(Object key, int hash,
1403:                    boolean invokeAlgorithm) {
1404:                /** OpenSymphony END */
1405:                Entry[] tab = table;
1406:                int index = hash & (tab.length - 1);
1407:                Entry first = tab[index];
1408:                Entry e = first;
1409:
1410:                for (;;) {
1411:                    if (e == null) {
1412:                        return null;
1413:                    } else if ((key == e.key)
1414:                            || ((e.hash == hash) && key.equals(e.key))) {
1415:                        Object oldValue = e.value;
1416:                        if (persistenceListener != null && (oldValue == NULL)) {
1417:                            oldValue = persistRetrieve(key);
1418:                        }
1419:
1420:                        e.value = null;
1421:                        count--;
1422:
1423:                        /** OpenSymphony BEGIN */
1424:                        if (!unlimitedDiskCache && !overflowPersistence) {
1425:                            persistRemove(e.key);
1426:                            // If we have a CacheEntry, update the groups
1427:                            if (oldValue instanceof CacheEntry) {
1428:                                CacheEntry oldEntry = (CacheEntry) oldValue;
1429:                                removeGroupMappings(oldEntry.getKey(), oldEntry
1430:                                        .getGroups(), true);
1431:                            }
1432:                        } else {
1433:                            // only remove from memory groups
1434:                            if (oldValue instanceof CacheEntry) {
1435:                                CacheEntry oldEntry = (CacheEntry) oldValue;
1436:                                removeGroupMappings(oldEntry.getKey(), oldEntry
1437:                                        .getGroups(), false);
1438:                            }
1439:                        }
1440:
1441:                        if (overflowPersistence && ((size() + 1) >= maxEntries)) {
1442:                            persistStore(key, oldValue);
1443:                            // add key to persistent groups but NOT to the memory groups
1444:                            if (oldValue instanceof CacheEntry) {
1445:                                CacheEntry oldEntry = (CacheEntry) oldValue;
1446:                                addGroupMappings(oldEntry.getKey(), oldEntry
1447:                                        .getGroups(), true, false);
1448:                            }
1449:                        }
1450:
1451:                        if (invokeAlgorithm) {
1452:                            itemRemoved(key);
1453:                        }
1454:
1455:                        /** OpenSymphony END */
1456:                        Entry head = e.next;
1457:
1458:                        for (Entry p = first; p != e; p = p.next) {
1459:                            head = new Entry(p.hash, p.key, p.value, head);
1460:                        }
1461:
1462:                        tab[index] = head;
1463:                        recordModification(head);
1464:
1465:                        return oldValue;
1466:                    } else {
1467:                        e = e.next;
1468:                    }
1469:                }
1470:            }
1471:
1472:            /**
1473:             * Save the state of the <tt>AbstractConcurrentReadCache</tt> instance to a stream.
1474:             * (i.e., serialize it).
1475:             *
1476:             * @serialData The <i>capacity</i> of the
1477:             * AbstractConcurrentReadCache (the length of the
1478:             * bucket array) is emitted (int), followed by the
1479:             * <i>size</i> of the AbstractConcurrentReadCache (the number of key-value
1480:             * mappings), followed by the key (Object) and value (Object)
1481:             * for each key-value mapping represented by the AbstractConcurrentReadCache.
1482:             * The key-value mappings are emitted in no particular order.
1483:             */
1484:            private synchronized void writeObject(java.io.ObjectOutputStream s)
1485:                    throws IOException {
1486:                // Write out the threshold, load factor, and any hidden stuff
1487:                s.defaultWriteObject();
1488:
1489:                // Write out number of buckets
1490:                s.writeInt(table.length);
1491:
1492:                // Write out size (number of Mappings)
1493:                s.writeInt(count);
1494:
1495:                // Write out keys and values (alternating)
1496:                for (int index = table.length - 1; index >= 0; index--) {
1497:                    Entry entry = table[index];
1498:
1499:                    while (entry != null) {
1500:                        s.writeObject(entry.key);
1501:                        s.writeObject(entry.value);
1502:                        entry = entry.next;
1503:                    }
1504:                }
1505:            }
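The stream layout documented in the `@serialData` comment above (capacity, then size, then alternating key/value objects) can be illustrated with a standalone round-trip. This is a hypothetical sketch, not the class's actual `readObject` counterpart (which is outside this chunk); `StreamLayoutDemo`, `write`, and `read` are invented names, and a plain `ObjectOutputStream` stands in for `defaultWriteObject()`.

```java
import java.io.*;
import java.util.*;

// Hypothetical illustration of the serialized layout described above:
// capacity (int), size (int), then alternating key/value objects.
public class StreamLayoutDemo {
    static byte[] write(int capacity, Map<String, String> map) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream s = new ObjectOutputStream(bytes)) {
            s.writeInt(capacity);   // number of buckets
            s.writeInt(map.size()); // number of mappings
            for (Map.Entry<String, String> e : map.entrySet()) {
                s.writeObject(e.getKey());
                s.writeObject(e.getValue());
            }
        }
        return bytes.toByteArray();
    }

    static Map<String, String> read(byte[] data) throws IOException, ClassNotFoundException {
        try (ObjectInputStream s = new ObjectInputStream(new ByteArrayInputStream(data))) {
            s.readInt(); // capacity (a real readObject would use it to size the table)
            int size = s.readInt();
            Map<String, String> map = new HashMap<>();
            for (int i = 0; i < size; i++) {
                map.put((String) s.readObject(), (String) s.readObject());
            }
            return map;
        }
    }
}
```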
1506:
1507:            /**
1508:             * Return hash code for Object x.
1509:             * Since we are using power-of-two
1510:             * tables, it is worth the effort to improve hashcode via
1511:             * the same multiplicative scheme as used in IdentityHashMap.
1512:             */
1513:            private static int hash(Object x) {
1514:                int h = x.hashCode();
1515:
1516:                // Multiply by 127 (quickly, via shifts), and mix in some high
1517:                // bits to help guard against bunching of codes that are
1518:                // consecutive or equally spaced.
1519:                return ((h << 7) - h + (h >>> 9) + (h >>> 17));
1520:            }
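The supplemental hash above can be exercised in isolation. The bit-mixing expression and the power-of-two mask come straight from the source; the class and method names (`SpreadHashDemo`, `spread`, `indexFor`) are hypothetical.

```java
// Hypothetical demo of the supplemental hash used above: multiply by 127
// via shifts, then fold in high bits so hashCodes that are consecutive or
// equally spaced don't bunch up in a power-of-two table.
public class SpreadHashDemo {
    static int spread(int h) {
        // (h << 7) - h == h * 127; the unsigned shifts mix in high bits
        return (h << 7) - h + (h >>> 9) + (h >>> 17);
    }

    static int indexFor(int hash, int tableLength) {
        // tableLength must be a power of two for this mask to be valid
        return hash & (tableLength - 1);
    }

    public static void main(String[] args) {
        // a hashCode of 1 spreads to 127, which maps to bucket 15 of a 16-bucket table
        System.out.println(spread(1));               // 127
        System.out.println(indexFor(spread(1), 16)); // 15
    }
}
```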
1521:
1522:            /**
1523:             * Add this cache key to the specified groups.
1524:             * We have to treat the
1525:             * memory and disk group mappings separately so they remain valid for their
1526:             * corresponding memory/disk caches. (e.g. if mem is limited to 100 entries
1527:             * and disk is unlimited, the group mappings will be different).
1528:             *
1529:             * @param key The cache key that we are adding to the groups.
1530:             * @param newGroups the set of groups we want to add this cache entry to.
1531:             * @param persist A flag to indicate whether the keys should be added to
1532:             * the persistent cache layer.
1533:             * @param memory A flag to indicate whether the key should be added to
1534:             * the memory groups (important for overflow-to-disk)
1535:             */
1536:            private void addGroupMappings(String key, Set newGroups,
1537:                    boolean persist, boolean memory) {
1538:                if (newGroups == null) {
1539:                    return;
1540:                }
1541:
1542:                // Add this CacheEntry to the groups that it is now a member of
1543:                for (Iterator it = newGroups.iterator(); it.hasNext();) {
1544:                    String groupName = (String) it.next();
1545:
1546:                    // Update the in-memory groups
1547:                    if (memoryCaching && memory) {
1548:                        if (groups == null) {
1549:                            groups = new HashMap();
1550:                        }
1551:
1552:                        Set memoryGroup = (Set) groups.get(groupName);
1553:
1554:                        if (memoryGroup == null) {
1555:                            memoryGroup = new HashSet();
1556:                            groups.put(groupName, memoryGroup);
1557:                        }
1558:
1559:                        memoryGroup.add(key);
1560:                    }
1561:
1562:                    // Update the persistent group maps
1563:                    if (persist) {
1564:                        Set persistentGroup = persistRetrieveGroup(groupName);
1565:
1566:                        if (persistentGroup == null) {
1567:                            persistentGroup = new HashSet();
1568:                        }
1569:
1570:                        persistentGroup.add(key);
1571:                        persistStoreGroup(groupName, persistentGroup);
1572:                    }
1573:                }
1574:            }
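The in-memory half of the bookkeeping above (lazily create the group map, then add the key to a per-group set) can be sketched on its own. `GroupMapDemo` and `getGroup` are hypothetical names, and `computeIfAbsent` replaces the explicit null-check in the source; only the group-name-to-key-set structure is taken from it.

```java
import java.util.*;

// Hypothetical sketch of the in-memory group bookkeeping performed by
// addGroupMappings(): the group map is created on first use, and each
// group name maps to the set of cache keys belonging to that group.
public class GroupMapDemo {
    private Map<String, Set<String>> groups; // created lazily, as in the source

    void addGroupMappings(String key, Set<String> newGroups) {
        if (newGroups == null) {
            return;
        }
        for (String groupName : newGroups) {
            if (groups == null) {
                groups = new HashMap<>();
            }
            // create the group's key set on demand, then record membership
            groups.computeIfAbsent(groupName, g -> new HashSet<>()).add(key);
        }
    }

    Set<String> getGroup(String groupName) {
        return groups == null ? null : groups.get(groupName);
    }
}
```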
1575:
1576:            /** OpenSymphony END (pretty long!) */
1577:            /**
1578:             * Returns the appropriate capacity (power of two) for the specified
1579:             * initial capacity argument.
1580:             */
1581:            private int p2capacity(int initialCapacity) {
1582:                int cap = initialCapacity;
1583:
1584:                // Compute the appropriate capacity
1585:                int result;
1586:
1587:                if ((cap > MAXIMUM_CAPACITY) || (cap < 0)) {
1588:                    result = MAXIMUM_CAPACITY;
1589:                } else {
1590:                    result = MINIMUM_CAPACITY;
1591:
1592:                    while (result < cap) {
1593:                        result <<= 1;
1594:                    }
1595:                }
1596:
1597:                return result;
1598:            }
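The capacity rounding above can be shown as a standalone method. The loop and the clamping rule are taken from the source; `CapacityDemo` is a hypothetical name, and the values of `MINIMUM_CAPACITY` and `MAXIMUM_CAPACITY` are assumptions for the demo.

```java
// Hypothetical standalone version of p2capacity(): round a requested
// capacity up to the next power of two, clamped to fixed bounds.
public class CapacityDemo {
    static final int MINIMUM_CAPACITY = 4;        // assumed value for this demo
    static final int MAXIMUM_CAPACITY = 1 << 30;  // assumed value for this demo

    static int p2capacity(int initialCapacity) {
        if (initialCapacity > MAXIMUM_CAPACITY || initialCapacity < 0) {
            return MAXIMUM_CAPACITY; // out-of-range request: use the cap
        }
        int result = MINIMUM_CAPACITY;
        while (result < initialCapacity) {
            result <<= 1; // double until we reach or pass the request
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(p2capacity(10)); // 16
        System.out.println(p2capacity(16)); // 16
        System.out.println(p2capacity(-1)); // 1073741824
    }
}
```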
1599:
1600:            /* Previous code
1601:            public Object put(Object key, Object value)*/
1602:            private Object put(Object key, Object value, boolean persist) {
1603:                /** OpenSymphony END */
1604:                if (value == null) {
1605:                    throw new NullPointerException();
1606:                }
1607:
1608:                int hash = hash(key);
1609:                Entry[] tab = table;
1610:                int index = hash & (tab.length - 1);
1611:                Entry first = tab[index];
1612:                Entry e = first;
1613:
1614:                for (;;) {
1615:                    if (e == null) {
1616:                        synchronized (this) {
1617:                            tab = table;
1618:
1619:                            /** OpenSymphony BEGIN */
1620:
1621:                            // Previous code
1622:                            /* if (first == tab[index]) {
1623:                                   // Add to front of list
1624:                                   Entry newEntry = new Entry(hash, key, value, first);
1625:                                   tab[index] = newEntry;
1626:                                   if (++count >= threshold) rehash();
1627:                                   else recordModification(newEntry);
1628:                                   return null; */
1629:
1630:                            Object oldValue = null;
1631:
1632:                            // Remove an item if the cache is full
1633:                            if (size() >= maxEntries) {
1634:                                // part of fix CACHE-255: method should return old value
1635:                                oldValue = remove(removeItem(), false, false);
1636:                            }
1637:
1638:                            if (first == tab[index]) {
1639:                                //  Add to front of list
1640:                                Entry newEntry = null;
1641:
1642:                                if (memoryCaching) {
1643:                                    newEntry = new Entry(hash, key, value,
1644:                                            first);
1645:                                } else {
1646:                                    newEntry = new Entry(hash, key, NULL, first);
1647:                                }
1648:
1649:                                tab[index] = newEntry;
1650:                                itemPut(key);
1651:
1652:                                // Persist if required
1653:                                if (persist && !overflowPersistence) {
1654:                                    persistStore(key, value);
1655:                                }
1656:
1657:                                // If we have a CacheEntry, update the group lookups
1658:                                if (value instanceof CacheEntry) {
1659:                                    updateGroups(null, (CacheEntry) value,
1660:                                            persist);
1661:                                }
1662:
1663:                                if (++count >= threshold) {
1664:                                    rehash();
1665:                                } else {
1666:                                    recordModification(newEntry);
1667:                                }
1668:
1669:                                return oldValue;
1670:
1671:                                /** OpenSymphony END  */
1672:                            } else {
1673:                                // wrong list -- retry
1674:
1675:                                /** OpenSymphony BEGIN */
1676:
1677:                                /* Previous code
1678:                                return sput(key, value, hash);*/
1679:                                return sput(key, value, hash, persist);
1680:
1681:                                /** OpenSymphony END */
1682:                            }
1683:                        }
1684:                    } else if ((key == e.key)
1685:                            || ((e.hash == hash) && key.equals(e.key))) {
1686:                        // synch to avoid race with remove and to
1687:                        // ensure proper serialization of multiple replaces
1688:                        synchronized (this) {
1689:                            tab = table;
1690:
1691:                            Object oldValue = e.value;
1692:
1693:                            // [CACHE-118] - get the old cache entry even if there's no memory cache
1694:                            if (persist && (oldValue == NULL)) {
1695:                                oldValue = persistRetrieve(key);
1696:                            }
1697:
1698:                            if ((first == tab[index]) && (oldValue != null)) {
1699:                                /** OpenSymphony BEGIN */
1700:
1701:                                /* Previous code
1702:                                e.value = value;
1703:                                return oldValue; */
1704:                                if (memoryCaching) {
1705:                                    e.value = value;
1706:                                }
1707:
1708:                                // Persist if required
1709:                                if (persist && overflowPersistence) {
1710:                                    persistRemove(key);
1711:                                } else if (persist) {
1712:                                    persistStore(key, value);
1713:                                }
1714:
1715:                                updateGroups(oldValue, value, persist);
1716:                                itemPut(key);
1717:
1718:                                return oldValue;
1719:
1720:                                /**        OpenSymphony END */
1721:                            } else {
1722:                                // retry if wrong list or lost race against concurrent remove
1723:
1724:                                /** OpenSymphony BEGIN */
1725:
1726:                                /* Previous code
1727:                                return sput(key, value, hash);*/
1728:                                return sput(key, value, hash, persist);
1729:
1730:                                /** OpenSymphony END */
1731:                            }
1732:                        }
1733:                    } else {
1734:                        e = e.next;
1735:                    }
1736:                }
1737:            }
1738:
1739:            private synchronized Object remove(Object key,
1740:                    boolean invokeAlgorithm, boolean forcePersist)
1741:            /* Previous code
1742:            public Object remove(Object key) */
1743:
1744:            /** OpenSymphony END */
1745:            {
1746:                /*
1747:                  Strategy:
1748:
1749:                  Find the entry, then
1750:                    1. Set value field to null, to force get() to retry
1751:                    2. Rebuild the list without this entry.
1752:                       All entries following the removed node can stay in the list, but
1753:                       all preceding ones need to be cloned. Traversals rely
1754:                       on this strategy to ensure that elements will not be
1755:                       repeated during iteration.
1756:                 */
1757:
1758:                /** OpenSymphony BEGIN */
1759:                if (key == null) {
1760:                    return null;
1761:                }
1762:
1763:                /** OpenSymphony END */
1764:                int hash = hash(key);
1765:                Entry[] tab = table;
1766:                int index = hash & (tab.length - 1);
1767:                Entry first = tab[index];
1768:                Entry e = first;
1769:
1770:                for (;;) {
1771:                    if (e == null) {
1772:                        tab = getTableForReading();
1773:
1774:                        if (first == tab[index]) {
1775:                            return null;
1776:                        } else {
1777:                            // Wrong list -- must restart traversal at new first
1778:
1779:                            /** OpenSymphony BEGIN */
1780:
1781:                            /* Previous Code
1782:                            return sremove(key, hash); */
1783:                            return sremove(key, hash, invokeAlgorithm);
1784:
1785:                            /** OpenSymphony END */
1786:                        }
1787:                    } else if ((key == e.key)
1788:                            || ((e.hash == hash) && key.equals(e.key))) {
1789:                        synchronized (this) {
1790:                            tab = table;
1791:
1792:                            Object oldValue = e.value;
1793:                            if (persistenceListener != null
1794:                                    && (oldValue == NULL)) {
1795:                                oldValue = persistRetrieve(key);
1796:                            }
1797:
1798:                            // re-find under synch if wrong list
1799:                            if ((first != tab[index]) || (oldValue == null)) {
1800:                                /** OpenSymphony BEGIN */
1801:
1802:                                /* Previous Code
1803:                                return sremove(key, hash); */
1804:                                return sremove(key, hash, invokeAlgorithm);
1805:                            }
1806:
1807:                            /** OpenSymphony END */
1808:                            e.value = null;
1809:                            count--;
1810:
1811:                            /** OpenSymphony BEGIN */
1812:                            if (forcePersist
1813:                                    || (!unlimitedDiskCache && !overflowPersistence)) {
1814:                                persistRemove(e.key);
1815:                                // If we have a CacheEntry, update the group lookups
1816:                                if (oldValue instanceof CacheEntry) {
1817:                                    CacheEntry oldEntry = (CacheEntry) oldValue;
1818:                                    removeGroupMappings(oldEntry.getKey(),
1819:                                            oldEntry.getGroups(), true);
1820:                                }
1821:                            } else {
1822:                                // only remove from memory groups
1823:                                if (oldValue instanceof CacheEntry) {
1824:                                    CacheEntry oldEntry = (CacheEntry) oldValue;
1825:                                    removeGroupMappings(oldEntry.getKey(),
1826:                                            oldEntry.getGroups(), false);
1827:                                }
1828:                            }
1829:
1830:                            if (!forcePersist && overflowPersistence
1831:                                    && ((size() + 1) >= maxEntries)) {
1832:                                persistStore(key, oldValue);
1833:                                // add key to persistent groups but NOT to the memory groups
1834:                                if (oldValue instanceof CacheEntry) {
1835:                                    CacheEntry oldEntry = (CacheEntry) oldValue;
1836:                                    addGroupMappings(oldEntry.getKey(),
1837:                                            oldEntry.getGroups(), true, false);
1838:                                }
1839:                            }
1840:
1841:                            if (invokeAlgorithm) {
1842:                                itemRemoved(key);
1843:                            }
1844:
1845:                            // introduced to fix bug CACHE-255 
1846:                            if (oldValue instanceof CacheEntry) {
1847:                                CacheEntry oldEntry = (CacheEntry) oldValue;
1848:                                oldValue = oldEntry.getContent();
1849:                            }
1850:
1851:                            /** OpenSymphony END */
1852:                            Entry head = e.next;
1853:
1854:                            for (Entry p = first; p != e; p = p.next) {
1855:                                head = new Entry(p.hash, p.key, p.value, head);
1856:                            }
1857:
1858:                            tab[index] = head;
1859:                            recordModification(head);
1860:
1861:                            return oldValue;
1862:                        }
1863:                    } else {
1864:                        e = e.next;
1865:                    }
1866:                }
1867:            }
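The list-rebuild step at the end of remove() (set the head to the removed node's successor, then clone every preceding entry on top of it, sharing the tail) can be sketched on a minimal immutable node type. `CopyOnRemoveDemo` and `Node` are hypothetical names; only the rebuild loop comes from the source.

```java
// Hypothetical sketch of the copy-on-remove rebuild used by remove()/sremove():
// entries after the removed node are reused, entries before it are cloned,
// so unsynchronized readers traversing the old list never see a stale or
// repeated element.
public class CopyOnRemoveDemo {
    static final class Node {
        final Object value;
        final Node next; // immutable links, as in the cache's Entry class
        Node(Object value, Node next) { this.value = value; this.next = next; }
    }

    static Node remove(Node first, Object value) {
        for (Node e = first; e != null; e = e.next) {
            if (e.value.equals(value)) {
                Node head = e.next; // the tail is shared, not copied
                for (Node p = first; p != e; p = p.next) {
                    head = new Node(p.value, head); // clone the predecessors
                }
                return head;
            }
        }
        return first; // not found: list unchanged
    }
}
```

As in the original, the cloned predecessors end up in reverse order; each bucket only needs set semantics, so the ordering is irrelevant.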
1868:
1869:            /**
1870:             * Remove this CacheEntry from the groups it no longer belongs to.
1871:             * We have to treat the memory and disk group mappings separately so they remain
1872:             * valid for their corresponding memory/disk caches. (e.g. if mem is limited
1873:             * to 100 entries and disk is unlimited, the group mappings will be
1874:             * different).
1875:             *
1876:             * @param key The cache key that we are removing from the groups.
1877:             * @param oldGroups the set of groups we want to remove the cache entry
1878:             * from.
1879:             * @param persist A flag to indicate whether the keys should be removed
1880:             * from the persistent cache layer.
1881:             */
1882:            private void removeGroupMappings(String key, Set oldGroups,
1883:                    boolean persist) {
1884:                if (oldGroups == null) {
1885:                    return;
1886:                }
1887:
1888:                for (Iterator it = oldGroups.iterator(); it.hasNext();) {
1889:                    String groupName = (String) it.next();
1890:
1891:                    // Update the in-memory groups
1892:                    if (memoryCaching && (this.groups != null)) {
1893:                        Set memoryGroup = (Set) groups.get(groupName);
1894:
1895:                        if (memoryGroup != null) {
1896:                            memoryGroup.remove(key);
1897:
1898:                            if (memoryGroup.isEmpty()) {
1899:                                groups.remove(groupName);
1900:                            }
1901:                        }
1902:                    }
1903:
1904:                    // Update the persistent group maps
1905:                    if (persist) {
1906:                        Set persistentGroup = persistRetrieveGroup(groupName);
1907:
1908:                        if (persistentGroup != null) {
1909:                            persistentGroup.remove(key);
1910:
1911:                            if (persistentGroup.isEmpty()) {
1912:                                persistRemoveGroup(groupName);
1913:                            } else {
1914:                                persistStoreGroup(groupName, persistentGroup);
1915:                            }
1916:                        }
1917:                    }
1918:                }
1919:            }
1920:
1921:            /**
1922:             * Updates the groups to reflect the differences between the old and new
1923:             * cache entries. Either of the old or new values can be <code>null</code>
1924:             * or contain a <code>null</code> group list, in which case the entry's
1925:             * groups will all be added or removed respectively.
1926:             *
1927:             * @param oldValue The old CacheEntry that is being replaced.
1928:             * @param newValue The new CacheEntry that is being inserted.
1929:             */
1930:            private void updateGroups(Object oldValue, Object newValue,
1931:                    boolean persist) {
1932:                // If we have/had a CacheEntry, update the group lookups
1933:                boolean oldIsCE = oldValue instanceof CacheEntry;
1934:                boolean newIsCE = newValue instanceof CacheEntry;
1935:
1936:                if (newIsCE && oldIsCE) {
1937:                    updateGroups((CacheEntry) oldValue, (CacheEntry) newValue,
1938:                            persist);
1939:                } else if (newIsCE) {
1940:                    updateGroups(null, (CacheEntry) newValue, persist);
1941:                } else if (oldIsCE) {
1942:                    updateGroups((CacheEntry) oldValue, null, persist);
1943:                }
1944:            }
1945:
1946:            /**
1947:             * Updates the groups to reflect the differences between the old and new cache entries.
1948:             * Either of the old or new values can be <code>null</code>
1949:             * or contain a <code>null</code> group list, in which case the entry's
1950:             * groups will all be added or removed respectively.
1951:             *
1952:             * @param oldValue The old CacheEntry that is being replaced.
1953:             * @param newValue The new CacheEntry that is being inserted.
1954:             */
1955:            private void updateGroups(CacheEntry oldValue, CacheEntry newValue,
1956:                    boolean persist) {
1957:                Set oldGroups = null;
1958:                Set newGroups = null;
1959:
1960:                if (oldValue != null) {
1961:                    oldGroups = oldValue.getGroups();
1962:                }
1963:
1964:                if (newValue != null) {
1965:                    newGroups = newValue.getGroups();
1966:                }
1967:
1968:                // Get the names of the groups to remove
1969:                if (oldGroups != null) {
1970:                    Set removeFromGroups = new HashSet();
1971:
1972:                    for (Iterator it = oldGroups.iterator(); it.hasNext();) {
1973:                        String groupName = (String) it.next();
1974:
1975:                        if ((newGroups == null)
1976:                                || !newGroups.contains(groupName)) {
1977:                            // We need to remove this group
1978:                            removeFromGroups.add(groupName);
1979:                        }
1980:                    }
1981:
1982:                    removeGroupMappings(oldValue.getKey(), removeFromGroups,
1983:                            persist);
1984:                }
1985:
1986:                // Get the names of the groups to add
1987:                if (newGroups != null) {
1988:                    Set addToGroups = new HashSet();
1989:
1990:                    for (Iterator it = newGroups.iterator(); it.hasNext();) {
1991:                        String groupName = (String) it.next();
1992:
1993:                        if ((oldGroups == null)
1994:                                || !oldGroups.contains(groupName)) {
1995:                            // We need to add this group
1996:                            addToGroups.add(groupName);
1997:                        }
1998:                    }
1999:
2000:                    addGroupMappings(newValue.getKey(), addToGroups, persist,
2001:                            true);
2002:                }
2003:            }
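The group diffing above amounts to two set differences: groups present only in the old set are removed, groups present only in the new set are added. A sketch using `removeAll` in place of the explicit loops (with the hypothetical names `GroupDiffDemo`, `toRemove`, `toAdd`):

```java
import java.util.*;

// Hypothetical sketch of the diffing performed by updateGroups():
// remove = oldGroups \ newGroups, add = newGroups \ oldGroups.
public class GroupDiffDemo {
    static Set<String> toRemove(Set<String> oldGroups, Set<String> newGroups) {
        Set<String> result =
                new HashSet<>(oldGroups == null ? Collections.<String>emptySet() : oldGroups);
        if (newGroups != null) {
            result.removeAll(newGroups); // keep only the groups the entry left
        }
        return result;
    }

    static Set<String> toAdd(Set<String> oldGroups, Set<String> newGroups) {
        Set<String> result =
                new HashSet<>(newGroups == null ? Collections.<String>emptySet() : newGroups);
        if (oldGroups != null) {
            result.removeAll(oldGroups); // keep only the groups the entry joined
        }
        return result;
    }
}
```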
2004:
2005:            /**
2006:             * AbstractConcurrentReadCache collision list entry.
2007:             */
2008:            protected static class Entry implements Map.Entry {
2009:                protected final Entry next;
2010:                protected final Object key;
2011:
2012:                /*
2013:                   The use of volatile for value field ensures that
2014:                   we can detect status changes without synchronization.
2015:                   The other fields are never changed, and are
2016:                   marked as final.
2017:                 */
2018:                protected final int hash;
2019:                protected volatile Object value;
2020:
2021:                Entry(int hash, Object key, Object value, Entry next) {
2022:                    this.hash = hash;
2023:                    this.key = key;
2024:                    this.next = next;
2025:                    this.value = value;
2026:                }
2027:
2028:                // Map.Entry Ops
2029:                public Object getKey() {
2030:                    return key;
2031:                }
2032:
2033:                /**
2034:                 * Set the value of this entry.
2035:                 * Note: In an entrySet or
2036:                 * entrySet.iterator(), unless the set or iterator is used under
2037:                 * synchronization of the table as a whole (or you can otherwise
2038:                 * guarantee lack of concurrent modification), <tt>setValue</tt>
2039:                 * is not strictly guaranteed to actually replace the value field
2040:                 * obtained via the <tt>get</tt> operation of the underlying hash
2041:                 * table in multithreaded applications.  If iterator-wide
2042:                 * synchronization is not used, and any other concurrent
2043:                 * <tt>put</tt> or <tt>remove</tt> operations occur, sometimes
2044:                 * even to <em>other</em> entries, then this change is not
2045:                 * guaranteed to be reflected in the hash table. (It might, or it
2046:                 * might not. There are no assurances either way.)
2047:                 *
2048:                 * @param      value   the new value.
2049:                 * @return     the previous value, or null if entry has been detectably
2050:                 * removed.
2051:                 * @exception  NullPointerException  if the value is <code>null</code>.
2052:                 *
2053:                 **/
2054:                public Object setValue(Object value) {
2055:                    if (value == null) {
2056:                        throw new NullPointerException();
2057:                    }
2058:
2059:                    Object oldValue = this.value;
2060:                    this.value = value;
2061:
2062:                    return oldValue;
2063:                }
2064:
2065:                /**
2066:                 * Get the value.
2067:                 * Note: In an entrySet or entrySet.iterator,
2068:                 * unless the set or iterator is used under synchronization of the
2069:                 * table as a whole (or you can otherwise guarantee lack of
2070:                 * concurrent modification), <tt>getValue</tt> <em>might</em>
2071:                 * return null, reflecting the fact that the entry has been
2072:                 * concurrently removed. However, there are no assurances that
2073:                 * concurrent removals will be reflected using this method.
2074:                 *
2075:                 * @return     the current value, or null if the entry has been
2076:                 * detectably removed.
2077:                 **/
2078:                public Object getValue() {
2079:                    return value;
2080:                }
2081:
2082:                public boolean equals(Object o) {
2083:                    if (!(o instanceof Map.Entry)) {
2084:                        return false;
2085:                    }
2086:
2087:                    Map.Entry e = (Map.Entry) o;
2088:
2089:                    if (!key.equals(e.getKey())) {
2090:                        return false;
2091:                    }
2092:
2093:                    Object v = value;
2094:
2095:                    return (v == null) ? (e.getValue() == null) : v.equals(e
2096:                            .getValue());
2097:                }
2098:
2099:                public int hashCode() {
2100:                    Object v = value;
2101:
2102:                    return hash ^ ((v == null) ? 0 : v.hashCode());
2103:                }
2104:
2105:                public String toString() {
2106:                    return key + "=" + value;
2107:                }
2108:
2109:                protected Object clone() {
2110:                    return new Entry(hash, key, value, ((next == null) ? null
2111:                            : (Entry) next.clone()));
2112:                }
2113:            }
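The <tt>setValue</tt> Javadoc above describes a write-through entry view: replacing a value through an entry obtained from <tt>entrySet()</tt> updates the underlying table and returns the previous value. A minimal sketch of that behavior, using <tt>java.util.concurrent.ConcurrentHashMap</tt> as a stand-in (this cache class is abstract and not directly instantiable; the <tt>SetValueDemo</tt> class and its contents are illustrative, not part of OSCache):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: ConcurrentHashMap is used as a stand-in map whose entry
// views share the write-through Map.Entry.setValue behavior documented above.
public class SetValueDemo {
    static int demo() {
        Map<String, Integer> map = new ConcurrentHashMap<>();
        map.put("hits", 1);

        for (Map.Entry<String, Integer> e : map.entrySet()) {
            Integer old = e.setValue(2); // returns the previous value
            System.out.println("previous = " + old);
        }

        return map.get("hits"); // the table now reflects the new value
    }

    public static void main(String[] args) {
        System.out.println("current = " + demo());
    }
}
```

Note that, like the <tt>setValue</tt> above, <tt>ConcurrentHashMap</tt> entry views also reject <code>null</code> values with a <tt>NullPointerException</tt>.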
2114:
2115:            protected class HashIterator implements Iterator, Enumeration {
2116:                protected final Entry[] tab; // snapshot of table
2117:                protected Entry entry = null; // current node of slot
2118:                protected Entry lastReturned = null; // last node returned by next
2119:                protected Object currentKey; // key for current node
2120:                protected Object currentValue; // value for current node
2121:                protected int index; // current slot
2122:
2123:                protected HashIterator() {
2124:                    tab = AbstractConcurrentReadCache.this.getTableForReading();
2125:                    index = tab.length - 1;
2126:                }
2127:
2128:                public boolean hasMoreElements() {
2129:                    return hasNext();
2130:                }
2131:
2132:                public boolean hasNext() {
2133:                    /*
2134:                      currentKey and currentValue are set here to ensure that next()
2135:                      returns normally if hasNext() returns true. This avoids
2136:                      surprises especially when final element is removed during
2137:                      traversal -- instead, we just ignore the removal during
2138:                      current traversal.
2139:                     */
2140:                    for (;;) {
2141:                        if (entry != null) {
2142:                            Object v = entry.value;
2143:
2144:                            if (v != null) {
2145:                                currentKey = entry.key;
2146:                                currentValue = v;
2147:
2148:                                return true;
2149:                            } else {
2150:                                entry = entry.next;
2151:                            }
2152:                        }
2153:
2154:                        while ((entry == null) && (index >= 0)) {
2155:                            entry = tab[index--];
2156:                        }
2157:
2158:                        if (entry == null) {
2159:                            currentKey = currentValue = null;
2160:
2161:                            return false;
2162:                        }
2163:                    }
2164:                }
2165:
2166:                public Object next() {
2167:                    if ((currentKey == null) && !hasNext()) {
2168:                        throw new NoSuchElementException();
2169:                    }
2170:
2171:                    Object result = returnValueOfNext();
2172:                    lastReturned = entry;
2173:                    currentKey = currentValue = null;
2174:                    entry = entry.next;
2175:
2176:                    return result;
2177:                }
2178:
2179:                public Object nextElement() {
2180:                    return next();
2181:                }
2182:
2183:                public void remove() {
2184:                    if (lastReturned == null) {
2185:                        throw new IllegalStateException();
2186:                    }
2187:
2188:                    AbstractConcurrentReadCache.this.remove(lastReturned.key);
2189:                }
2190:
2191:                protected Object returnValueOfNext() {
2192:                    return entry;
2193:                }
2194:            }
2195:
2196:            protected class KeyIterator extends HashIterator {
2197:                protected Object returnValueOfNext() {
2198:                    return currentKey;
2199:                }
2200:            }
2201:
2202:            protected class ValueIterator extends HashIterator {
2203:                protected Object returnValueOfNext() {
2204:                    return currentValue;
2205:                }
2206:            }
2207:        }
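The comment in <tt>hasNext()</tt> above explains the weakly consistent traversal contract: the iterator latches <tt>currentKey</tt>/<tt>currentValue</tt> so that <tt>next()</tt> returns normally even if the entry is concurrently removed, and no <tt>ConcurrentModificationException</tt> is ever thrown. A minimal sketch of these semantics, again using <tt>ConcurrentHashMap</tt> as a stand-in with the same weakly consistent iterators (the <tt>WeaklyConsistentDemo</tt> class is illustrative, not part of OSCache):

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: demonstrates weakly consistent iteration, where a removal
// during traversal is simply ignored rather than failing the iterator.
public class WeaklyConsistentDemo {
    static int traverseWhileRemoving() {
        Map<String, String> map = new ConcurrentHashMap<>();
        map.put("a", "1");
        map.put("b", "2");
        map.put("c", "3");

        int seen = 0;
        Iterator<Map.Entry<String, String>> it = map.entrySet().iterator();

        while (it.hasNext()) {
            Map.Entry<String, String> e = it.next();
            map.remove("c"); // concurrent removal: no ConcurrentModificationException
            if (e.getValue() != null) {
                seen++;
            }
        }

        return seen; // 2 or 3, depending on whether "c" was reached before removal
    }

    public static void main(String[] args) {
        System.out.println("entries observed: " + traverseWhileRemoving());
    }
}
```

As the comment notes, whether the removed entry is observed is timing-dependent; the only guarantee is that traversal completes without error.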