Learning JDK Code - Collection


HashMap vs Hashtable
Hashtable is among the original collection classes in Java. It extends the Dictionary class, which, as the Javadocs state, is obsolete and has been replaced by the Map interface. HashMap is part of the newer Collections Framework, added in Java 2.

The key difference between the two is that access to the Hashtable is synchronized on the table itself, while access to the HashMap is not synchronized. This makes HashMap better suited to single-threaded applications, since unsynchronized objects typically perform better than synchronized ones.

HashMap has a more complex hashing scheme than Hashtable. It takes the hash value from the key and then hashes it again with a supplemental hash function. This improves the distribution of keys across buckets and hence the performance of the map.

Another difference is that the iterator of HashMap is fail-fast, while the Enumeration of Hashtable is not. If the map is structurally modified while iterating, the iterator throws a ConcurrentModificationException.
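This behavior can be reproduced with a short test that modifies the map from inside the iteration loop; the class name FailFastDemo is illustrative:

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Illustrative sketch: structurally modifying a HashMap while iterating
// trips the modCount check and throws ConcurrentModificationException.
public class FailFastDemo {
    public static boolean iteratorThrows() {
        Map<String, Integer> map = new HashMap<String, Integer>();
        map.put("a", 1);
        map.put("b", 2);
        try {
            for (Iterator<String> it = map.keySet().iterator(); it.hasNext(); ) {
                it.next();
                map.put("c", 3); // structural modification bumps modCount
            }
            return false; // not reached: the next it.next() call throws
        } catch (ConcurrentModificationException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(iteratorThrows()); // true
    }
}
```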

A third difference is that HashMap permits null keys and null values, while Hashtable permits neither. Note that HashMap allows at most one null key (keys must be unique) but any number of null values.
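A minimal check of this difference (the class name NullKeyDemo is mine):

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

// Illustrative sketch: HashMap accepts one null key and null values;
// Hashtable rejects null keys with a NullPointerException.
public class NullKeyDemo {
    public static boolean hashMapAcceptsNull() {
        Map<String, String> map = new HashMap<String, String>();
        map.put(null, "value for null key"); // allowed: stored at index 0
        map.put("key", null);                // null values are also allowed
        return map.containsKey(null) && map.get("key") == null;
    }

    public static boolean hashtableRejectsNullKey() {
        Map<String, String> table = new Hashtable<String, String>();
        try {
            table.put(null, "value"); // key.hashCode() on a null key
            return false;             // not reached
        } catch (NullPointerException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(hashMapAcceptsNull());      // true
        System.out.println(hashtableRejectsNullKey()); // true
    }
}
```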
1. Initial size and resize
HashMap uses power-of-two length tables, default capacity is 16.
static final int DEFAULT_INITIAL_CAPACITY = 16;
static final int MAXIMUM_CAPACITY = 1 << 30;
static final float DEFAULT_LOAD_FACTOR = 0.75f;
public HashMap(int initialCapacity, float loadFactor) {
    // Find a power of 2 >= initialCapacity
    int capacity = 1;
    while (capacity < initialCapacity)
        capacity <<= 1;
}
The user can specify an initial map size; HashMap then rounds it up to the smallest power of two that is at least the requested size.
resize(2 * table.length);
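The rounding loop above can be exercised on its own; the class name CapacityDemo is illustrative:

```java
// Illustrative sketch of the capacity rounding shown above: HashMap rounds
// a requested initial capacity up to the next power of two.
public class CapacityDemo {
    public static int roundUpToPowerOfTwo(int initialCapacity) {
        int capacity = 1;
        while (capacity < initialCapacity)
            capacity <<= 1; // doubles until >= requested size
        return capacity;
    }

    public static void main(String[] args) {
        System.out.println(roundUpToPowerOfTwo(10)); // 16
        System.out.println(roundUpToPowerOfTwo(16)); // 16
        System.out.println(roundUpToPowerOfTwo(17)); // 32
    }
}
```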

Hashtable constructs a new, empty table with the specified initial capacity; the default capacity is 11.
public Hashtable(int initialCapacity, float loadFactor) {
    if (initialCapacity==0)
        initialCapacity = 1;
    this.loadFactor = loadFactor;
    table = new Entry[initialCapacity];
    threshold = (int)(initialCapacity * loadFactor);
}
public Hashtable() {
    this(11, 0.75f);
}
int newCapacity = oldCapacity * 2 + 1;
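A sketch of the resulting growth sequence (class name HashtableGrowthDemo is mine):

```java
import java.util.Arrays;

// Illustrative sketch: Hashtable starts at 11 buckets and on rehash grows
// to oldCapacity * 2 + 1, which keeps capacities odd (and often prime in
// the early steps: 11, 23, 47, ...).
public class HashtableGrowthDemo {
    public static int[] capacitySequence(int steps) {
        int[] result = new int[steps];
        int capacity = 11; // Hashtable's default initial capacity
        for (int i = 0; i < steps; i++) {
            result[i] = capacity;
            capacity = capacity * 2 + 1; // the rehash rule above
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(capacitySequence(4))); // [11, 23, 47, 95]
    }
}
```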

HashMap hash function
The way the HashMap is implemented relies on the hashCode function being sufficiently well implemented. If you have many collisions on the lower bits, the HashMap will not perform well.

Because the implementation of hashCode is outside HashMap's control (every class can supply its own), HashMap applies an additional hash function that shifts the object's hashCode around a little to ensure that the lower bits are distributed more randomly.

HashMap uses power-of-two length tables, and hashes keys by masking away the higher bits and taking only the lower bits of their hash code.

So hash() attempts to bring relevancy to the higher bits, which otherwise would get masked away (indexFor basically discards the higher bits of h and takes only the lower k bits where length == (1 << k)).

int hash = hash(key.hashCode());
int i = indexFor(hash, table.length);
/**
 * Applies a supplemental hash function to a given hashCode, which
 * defends against poor quality hash functions.  This is critical
 * because HashMap uses power-of-two length hash tables, that
 * otherwise encounter collisions for hashCodes that do not differ
 * in lower bits. Note: Null keys always map to hash 0, thus index 0.
 */
static int hash(int h) {
    // This function ensures that hashCodes that differ only by
    // constant multiples at each bit position have a bounded
    // number of collisions (approximately 8 at default load factor).
    h ^= (h >>> 20) ^ (h >>> 12);
    return h ^ (h >>> 7) ^ (h >>> 4);
}
static int indexFor(int h, int length) {
    return h & (length-1);
}          
Hashtable hash function
By doing the more expensive % operation (instead of simple bit masking), the performance of Hashtable is less sensitive to hash codes with poor distribution in the lower bits (especially if table.length is a prime number).
int hash = key.hashCode();
int index = (hash & 0x7FFFFFFF) % tab.length;
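A small sketch of this index computation (class name HashtableIndexDemo is mine), showing that masking off the sign bit keeps negative hashCodes in range:

```java
// Illustrative sketch: Hashtable strips the sign bit before taking the
// modulus, so negative hashCodes still map to a valid bucket index.
public class HashtableIndexDemo {
    public static int index(int hashCode, int tableLength) {
        return (hashCode & 0x7FFFFFFF) % tableLength;
    }

    public static void main(String[] args) {
        System.out.println(index(-2, 11)); // sign bit masked off first: 0
        System.out.println(index(25, 11)); // 25 % 11 = 3
    }
}
```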
HashMap[Sun JDK]
 public class HashMap<K,V> extends AbstractMap<K,V> implements Map<K,V>, Cloneable, Serializable  
 {  
   // The next size value at which to resize (capacity * load factor).  
   int threshold;    
   public HashMap() {  
     this.loadFactor = DEFAULT_LOAD_FACTOR;  
     threshold = (int)(DEFAULT_INITIAL_CAPACITY * DEFAULT_LOAD_FACTOR); //  
     table = new Entry[DEFAULT_INITIAL_CAPACITY];  
     init(); // subclass may override this  
   }  
   public V put(K key, V value) {  
     if (key == null)  
       return putForNullKey(value); // We can put null key and null value to HashMap.  
     int hash = hash(key.hashCode());  
     int i = indexFor(hash, table.length);  
     for (Entry e = table[i]; e != null; e = e.next) {  
       Object k;  
       if (e.hash == hash && ((k = e.key) == key || key.equals(k))) // call e.key == key  
       {  
         V oldValue = e.value;  
         e.value = value;  
         e.recordAccess(this);  
         return oldValue;  
       }  
     }  
     modCount++;  
     addEntry(hash, key, value, i);  
     return null;  
   }  
   public V get(Object key) {  
     if (key == null)  
       return getForNullKey();  
     int hash = hash(key.hashCode());  
     for (Entry e = table[indexFor(hash, table.length)];  
        e != null;  
        e = e.next) {  
       Object k;  
       if (e.hash == hash && ((k = e.key) == key || key.equals(k)))  
         return e.value;  
     }  
     return null;  
   }  
   void addEntry(int hash, K key, V value, int bucketIndex) {  
    Entry e = table[bucketIndex];  
     table[bucketIndex] = new Entry(hash, key, value, e);  
     if (size++ >= threshold)  
       resize(2 * table.length);  
 }  
   public boolean containsKey(Object key) {  
     return getEntry(key) != null;  
   }  
   final Entry<K,V> getEntry(Object key) {  
     int hash = (key == null) ? 0 : hash(key.hashCode());  
     for (Entry<K,V> e = table[indexFor(hash, table.length)];  
        e != null;  
        e = e.next) {  
       Object k;  
       if (e.hash == hash &&  
         ((k = e.key) == key || (key != null && key.equals(k))))  
         return e;  
     }  
     return null;  
 }  
    /**  
    * Applies a supplemental hash function to a given hashCode, which  
    * defends against poor quality hash functions. This is critical  
    * because HashMap uses power-of-two length hash tables, that  
    * otherwise encounter collisions for hashCodes that do not differ  
    * in lower bits. Note: Null keys always map to hash 0, thus index 0.  
    */  
   static int hash(int h) {  
     // This function ensures that hashCodes that differ only by  
     // constant multiples at each bit position have a bounded  
     // number of collisions (approximately 8 at default load factor).  
     h ^= (h >>> 20) ^ (h >>> 12);  
     return h ^ (h >>> 7) ^ (h >>> 4);  
   }  
   static int indexFor(int h, int length) {  
     return h & (length-1);  
 }  
   public boolean containsValue(Object value) {  
      if (value == null)  
       return containsNullValue();  
      Entry[] tab = table;  
     for (int i = 0; i < tab.length ; i++)  
       for (Entry e = tab[i] ; e != null ; e = e.next)  
         if (value.equals(e.value))  
           return true;  
      return false;  
 }  
   static class Entry<K,V> implements Map.Entry<K,V> {  
     final K key;  
     V value;  
     Entry<K,V> next;  
     final int hash;  
     public final boolean equals(Object o) {  
       if (!(o instanceof Map.Entry))  
         return false;  
       Map.Entry e = (Map.Entry)o;  
       Object k1 = getKey();  
       Object k2 = e.getKey();  
       if (k1 == k2 || (k1 != null && k1.equals(k2))) {  
         Object v1 = getValue();  
         Object v2 = e.getValue();  
         if (v1 == v2 || (v1 != null && v1.equals(v2)))  
           return true;  
       }  
       return false;  
     }  
     public final int hashCode() {  
       return (key==null  ? 0 : key.hashCode()) ^  
           (value==null ? 0 : value.hashCode());  
     }  
   }  
   private abstract class HashIterator<E> implements Iterator<E> {  
     Entry<K,V> next;     // next entry to return  
     int expectedModCount;     // For fast-fail  
     int index;          // current slot  
     Entry<K,V> current;     // current entry  
     HashIterator() {  
       expectedModCount = modCount;  
       if (size > 0) { // advance to first entry  
         Entry[] t = table;  
         while (index < t.length && (next = t[index++]) == null)  
           ;  
       }  
     }  
     public final boolean hasNext() {  
       return next != null;  
     }  
     final Entry<K,V> nextEntry() {  
       if (modCount != expectedModCount)  
         throw new ConcurrentModificationException();  
       Entry<K,V> e = next;  
       if (e == null)  
         throw new NoSuchElementException();  
       if ((next = e.next) == null) {  
         Entry[] t = table;  
         while (index < t.length && (next = t[index++]) == null)  
           ;  
       }  
        current = e;  
       return e;  
     }  
     public void remove() {  
       if (current == null)  
         throw new IllegalStateException();  
       if (modCount != expectedModCount)  
         throw new ConcurrentModificationException();  
       Object k = current.key;  
       current = null;  
       HashMap.this.removeEntryForKey(k);  
       expectedModCount = modCount;  
     }  
   void resize(int newCapacity) {  
     Entry[] oldTable = table;  
     int oldCapacity = oldTable.length;  
     if (oldCapacity == MAXIMUM_CAPACITY) {  
       threshold = Integer.MAX_VALUE;  
       return;  
     }  
     Entry[] newTable = new Entry[newCapacity];  
     transfer(newTable);  
     table = newTable;  
     threshold = (int)(newCapacity * loadFactor);  
   }  
   void transfer(Entry[] newTable) {  
     Entry[] src = table;  
     int newCapacity = newTable.length;  
     for (int j = 0; j < src.length; j++) {  
       Entry<K,V> e = src[j];  
       if (e != null) {  
         src[j] = null;  
         do {  
           Entry<K,V> next = e.next;  
           int i = indexFor(e.hash, newCapacity);  
           e.next = newTable[i];  
           newTable[i] = e;  
           e = next;  
         } while (e != null);  
       }  
     }  
   }  
   private final class ValueIterator extends HashIterator<V> {  
     public V next() {  
       return nextEntry().value;  
     }  
   }  
   private final class KeyIterator extends HashIterator<K> {  
     public K next() {  
       return nextEntry().getKey();  
     }  
   }  
   private final class EntryIterator extends HashIterator<Map.Entry<K,V>> {  
     public Map.Entry<K,V> next() {  
       return nextEntry();  
     }  
   }  
   // Subclass overrides these to alter behavior of views' iterator() method  
   Iterator<K> newKeyIterator()  {  
     return new KeyIterator();  
   }  
   Iterator<V> newValueIterator()  {  
     return new ValueIterator();  
   }  
   Iterator<Map.Entry<K,V>> newEntryIterator()  {  
     return new EntryIterator();  
   }  
   // Views  
   private transient Set<Map.Entry<K,V>> entrySet = null;  
 }  
   public Set<K> keySet() {  
     Set<K> ks = keySet;  
     return (ks != null ? ks : (keySet = new KeySet()));  
   }  
   private final class KeySet extends AbstractSet<K> {  
     public Iterator<K> iterator() {  
       return newKeyIterator();  
     }  
     public int size() {  
       return size;  
     }  
     public boolean contains(Object o) {  
       return containsKey(o);  
     }  
     public boolean remove(Object o) {  
       return HashMap.this.removeEntryForKey(o) != null;  
     }  
     public void clear() {  
       HashMap.this.clear();  
     }  
 }  
   public Collection<V> values() {  
     Collection<V> vs = values;  
     return (vs != null ? vs : (values = new Values()));  
   }  
   private final class Values extends AbstractCollection<V> {  
     public Iterator<V> iterator() {  
       return newValueIterator();  
     }  
     public int size() {  
       return size;  
     }  
     public boolean contains(Object o) {  
       return containsValue(o);  
     }  
     public void clear() {  
       HashMap.this.clear();  
     }  
 }  
   private void writeObject(java.io.ObjectOutputStream s)  
     throws IOException  
   {  
      Iterator<Map.Entry<K,V>> i =  
        (size > 0) ? entrySet0().iterator() : null;  
      // Write out the threshold, loadfactor, and any hidden stuff  
      s.defaultWriteObject();  
      // Write out number of buckets  
      s.writeInt(table.length);  
      // Write out size (number of Mappings)  
      s.writeInt(size);  
     // Write out keys and values (alternating)  
      if (i != null) {  
        while (i.hasNext()) {  
           Map.Entry<K,V> e = i.next();  
           s.writeObject(e.getKey());  
           s.writeObject(e.getValue());  
        }  
     }  
   }  
   private void readObject(java.io.ObjectInputStream s)  
      throws IOException, ClassNotFoundException  
   {  
      // Read in the threshold, loadfactor, and any hidden stuff  
      s.defaultReadObject();  
      // Read in number of buckets and allocate the bucket array;  
      int numBuckets = s.readInt();  
      table = new Entry[numBuckets];  
    init(); // Give subclass a chance to do its thing.  
      // Read in size (number of Mappings)  
      int size = s.readInt();  
      // Read the keys and values, and put the mappings in the HashMap  
      for (int i=0; i<size; i++) {  
        K key = (K) s.readObject();  
        V value = (V) s.readObject();  
        putForCreate(key, value);  
      }  
 }  
 }  
Hashtable
 public class Hashtable<K,V>  
   extends Dictionary<K,V>  
 implements Map<K,V>, Cloneable, java.io.Serializable {  
   private transient Entry[] table;  
   private transient int count;  
   private int threshold;  
   private float loadFactor;  
   public Hashtable(int initialCapacity, float loadFactor) {  
      this.loadFactor = loadFactor;  
      table = new Entry[initialCapacity];  
      threshold = (int)(initialCapacity * loadFactor);  
   }  
   public Hashtable(Map<? extends K, ? extends V> t) {  
      this(Math.max(2*t.size(), 11), 0.75f);  
      putAll(t);  
 }  
   public synchronized V get(Object key) {  
      Entry tab[] = table;  
      int hash = key.hashCode();  
      int index = (hash & 0x7FFFFFFF) % tab.length;  
      for (Entry<K,V> e = tab[index] ; e != null ; e = e.next) {  
        if ((e.hash == hash) && e.key.equals(key)) {  
           return e.value;  
        }  
      }  
      return null;  
   }  
   public synchronized V put(K key, V value) {  
      // Make sure the value is not null  
      if (value == null) {  
        throw new NullPointerException();  
      }  
      // Makes sure the key is not already in the hashtable.  
      Entry tab[] = table;  
      int hash = key.hashCode();  
      int index = (hash & 0x7FFFFFFF) % tab.length;  
      for (Entry<K,V> e = tab[index] ; e != null ; e = e.next) {  
        if ((e.hash == hash) && e.key.equals(key)) {  
           V old = e.value;  
           e.value = value;  
           return old;  
        }  
      }  
      modCount++;  
      if (count >= threshold) {  
        // Rehash the table if the threshold is exceeded  
        rehash();  
       tab = table;  
       index = (hash & 0x7FFFFFFF) % tab.length;  
      }  
      // Creates the new entry.  
      Entry<K,V> e = tab[index];  
      tab[index] = new Entry<K,V>(hash, key, value, e);  
      count++;  
      return null;  
 }  
   protected void rehash() {  
      int oldCapacity = table.length;  
      Entry[] oldMap = table;  
      int newCapacity = oldCapacity * 2 + 1;  
      Entry[] newMap = new Entry[newCapacity];  
      modCount++;  
      threshold = (int)(newCapacity * loadFactor);  
      table = newMap;  
      for (int i = oldCapacity ; i-- > 0 ;) {  
        for (Entry<K,V> old = oldMap[i] ; old != null ; ) {  
           Entry<K,V> e = old;  
           old = old.next;  
           int index = (e.hash & 0x7FFFFFFF) % newCapacity;  
           e.next = newMap[index];  
           newMap[index] = e;  
        }  
      }  
 }  
   public synchronized V remove(Object key) {  
      Entry tab[] = table;  
      int hash = key.hashCode();  
      int index = (hash & 0x7FFFFFFF) % tab.length;  
      for (Entry<K,V> e = tab[index], prev = null ; e != null ; prev = e, e = e.next) {  
        if ((e.hash == hash) && e.key.equals(key)) {  
           modCount++;  
           if (prev != null) {  
             prev.next = e.next;  
           } else {  
             tab[index] = e.next;  
           }  
           count--;  
           V oldValue = e.value;  
           e.value = null;  
           return oldValue;  
        }  
      }  
      return null;  
   }  
   public synchronized void putAll(Map<? extends K, ? extends V> t) {  
     for (Map.Entry<? extends K, ? extends V> e : t.entrySet())  
       put(e.getKey(), e.getValue());  
   }  
   public synchronized void clear() {  
      Entry tab[] = table;  
      modCount++;  
      for (int index = tab.length; --index >= 0; )  
        tab[index] = null;  
      count = 0;  
 }  
   public synchronized boolean containsKey(Object key) {  
      Entry tab[] = table;  
      int hash = key.hashCode();  
      int index = (hash & 0x7FFFFFFF) % tab.length;  
      for (Entry<K,V> e = tab[index] ; e != null ; e = e.next) {  
        if ((e.hash == hash) && e.key.equals(key)) {  
           return true;  
        }  
      }  
      return false;  
 }  
 }  
 public abstract class AbstractMap<K,V> implements Map<K,V> {  
   public int hashCode() {  
      int h = 0;  
      Iterator<Entry<K,V>> i = entrySet().iterator();  
      while (i.hasNext())  
        h += i.next().hashCode();  
      return h;  
   }  
   public boolean equals(Object o) {  
      if (o == this)  
        return true;  
      if (!(o instanceof Map))  
        return false;  
      Map<K,V> m = (Map<K,V>) o;  
      if (m.size() != size())  
        return false;  
     try {  
       Iterator<Entry<K,V>> i = entrySet().iterator();  
       while (i.hasNext()) {  
         Entry<K,V> e = i.next();  
           K key = e.getKey();  
         V value = e.getValue();  
         if (value == null) {  
           if (!(m.get(key)==null && m.containsKey(key)))  
             return false;  
         } else {  
           if (!value.equals(m.get(key)))  
             return false;  
         }  
       }  
     } catch (ClassCastException unused) {  
       return false;  
     } catch (NullPointerException unused) {  
       return false;  
     }  
      return true;  
 }  
 }  
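AbstractMap's entry-based equals() and hashCode() mean that two maps of different classes compare equal when their mappings match; a minimal sketch (class name MapEqualityDemo is illustrative):

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

// Illustrative sketch: equals() and hashCode() are defined over the entry
// set, so a HashMap and a Hashtable with the same mappings are equal even
// though their classes differ.
public class MapEqualityDemo {
    public static boolean sameMappingsAreEqual() {
        Map<String, Integer> a = new HashMap<String, Integer>();
        Map<String, Integer> b = new Hashtable<String, Integer>();
        a.put("k", 1);
        b.put("k", 1);
        // Both checks follow the Map contract: entry-by-entry comparison
        // and a hash code that is the sum of the entry hash codes.
        return a.equals(b) && a.hashCode() == b.hashCode();
    }

    public static void main(String[] args) {
        System.out.println(sameMappingsAreEqual()); // true
    }
}
```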

LinkedHashMap[TODO]
public class LinkedHashMap<K,V> extends HashMap<K,V> implements Map<K,V>
{
    // The head of the doubly linked list.
    private transient Entry header;
  
    /**
     * The iteration ordering method for this linked hash map: true
     * for access-order, false for insertion-order.
     */
    private final boolean accessOrder;  
    void init() {
        header = new Entry(-1, null, null, null);
        header.before = header.after = header;
    }  
}
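The accessOrder flag can be observed through the public three-argument constructor; a minimal sketch (class name AccessOrderDemo is mine):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch: with accessOrder = true a LinkedHashMap moves each
// accessed entry to the end of its doubly linked list, which is the usual
// building block for an LRU cache.
public class AccessOrderDemo {
    public static String orderAfterAccess(boolean accessOrder) {
        Map<String, Integer> map =
            new LinkedHashMap<String, Integer>(16, 0.75f, accessOrder);
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);
        map.get("a"); // in access order, "a" moves to the end of the list
        return String.join(",", map.keySet());
    }

    public static void main(String[] args) {
        System.out.println(orderAfterAccess(false)); // a,b,c (insertion order)
        System.out.println(orderAfterAccess(true));  // b,c,a (access order)
    }
}
```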
HashSet
public class HashSet<E> extends AbstractSet<E> implements Set<E>, Cloneable, java.io.Serializable
{
    private transient HashMap<E,Object> map;
    // Dummy value to associate with an Object in the backing Map
    private static final Object PRESENT = new Object();
    public HashSet() { map = new HashMap<E,Object>(); }
    public boolean add(E e) { return map.put(e, PRESENT) == null; }
}
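Because add() returns map.put(e, PRESENT) == null, adding a duplicate element returns false; a minimal sketch (class name HashSetDemo is illustrative):

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch: HashSet delegates to a backing HashMap, so add()
// returns false exactly when the element was already present (the put()
// call returned the previous PRESENT value instead of null).
public class HashSetDemo {
    public static boolean[] addTwice() {
        Set<String> set = new HashSet<String>();
        boolean first = set.add("x");  // true: new key in the backing map
        boolean second = set.add("x"); // false: put() returned PRESENT
        return new boolean[] { first, second };
    }

    public static void main(String[] args) {
        boolean[] r = addTwice();
        System.out.println(r[0] + " " + r[1]); // true false
    }
}
```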
TreeMap
public class TreeMap<K,V> extends AbstractMap<K,V>
    implements NavigableMap<K,V>, Cloneable, java.io.Serializable
// note that TreeMap does not extend HashMap
{
    private final Comparator<? super K> comparator;
    private transient Entry<K,V> root = null;
    private transient int modCount = 0;
    // private Entry getEntry(Object key)
    /**
     * Returns a Set view of the keys contained in this map.  The set's
     * iterator will return the keys in ascending order.  The map is backed by
     * this TreeMap instance, so changes to this map are reflected in
     * the Set, and vice-versa.  The Set supports element removal, which
     * removes the corresponding mapping from the map, via the
     * Iterator.remove, Set.remove, removeAll,
     * retainAll, and clear operations.  It does not support
     * the add or addAll operations.
     * @return a set view of the keys contained in this TreeMap.
     */
    public Set keySet() {
        if (keySet == null) {
            keySet = new AbstractSet() {
                public Iterator iterator() {
                    return new KeyIterator();
                }

                public int size() {
                    return TreeMap.this.size();
                }

                public boolean contains(Object o) {
                    return containsKey(o);
                }

                public boolean remove(Object o) {
                    int oldSize = size;
                    TreeMap.this.remove(o);
                    return size != oldSize;
                }

                public void clear() {
                    TreeMap.this.clear();
                }
            };
        }
        return keySet;
    }    
    public Collection values() {
        if (values == null) {
            values = new AbstractCollection() {
                public Iterator iterator() {
                    return new ValueIterator();
                }

                public int size() {
                    return TreeMap.this.size();
                }

                public boolean contains(Object o) {
                    for (Entry e = firstEntry(); e != null; e = successor(e))
                        if (valEquals(e.getValue(), o))
                            return true;
                    return false;
                }

                public boolean remove(Object o) {
                    for (Entry e = firstEntry(); e != null; e = successor(e)) {
                        if (valEquals(e.getValue(), o)) {
                            deleteEntry(e);
                            return true;
                        }
                    }
                    return false;
                }

                public void clear() {
                    TreeMap.this.clear();
                }
            };
        }
        return values;
    }
    public boolean containsValue(Object value) {
        return (root==null ? false :
                (value==null ? valueSearchNull(root)
                             : valueSearchNonNull(root, value)));
    }
    private boolean valueSearchNull(Entry n) {
        if (n.value == null)
            return true;

        // Check left and right subtrees for value
        return (n.left  != null && valueSearchNull(n.left)) ||
               (n.right != null && valueSearchNull(n.right));
    }
    private boolean valueSearchNonNull(Entry n, Object value) {
        // Check this node for the value
        if (value.equals(n.value))
            return true;
        // Check left and right subtrees for value
        return (n.left  != null && valueSearchNonNull(n.left, value)) ||
               (n.right != null && valueSearchNonNull(n.right, value));
    }    
    static final class Entry<K,V> implements Map.Entry<K,V> {
        K key;
        V value;
        Entry<K,V> left = null;
        Entry<K,V> right = null;
        Entry<K,V> parent;
        boolean color = BLACK;
     }
    // Returns the first Entry in the TreeMap (according to the TreeMap's
    // key-sort function).  Returns null if the TreeMap is empty.
    final Entry getFirstEntry() {
        Entry p = root;
        if (p != null)
            while (p.left != null)
                p = p.left;
        return p;
    }
    public boolean containsValue(Object value) {
        for (Entry e = getFirstEntry(); e != null; e = successor(e))
            if (valEquals(value, e.value))
                return true;
        return false;
    }    
    //final Entry getLastEntry()
    // Returns the successor of the specified Entry, or null if no such.
    static TreeMap.Entry successor(Entry t) {
        if (t == null)
            return null;
        else if (t.right != null) {
            Entry p = t.right;
            while (p.left != null)
                p = p.left;
            return p;
        } else {
            Entry p = t.parent;
            Entry ch = t;
            while (p != null && ch == p.right) {
                ch = p;
                p = p.parent;
            }
            return p;
        }
    }
    // Returns the predecessor of the specified Entry, or null if no such.
    static Entry predecessor(Entry t) {
        if (t == null)
            return null;
        else if (t.left != null) {
            Entry p = t.left;
            while (p.right != null)
                p = p.right;
            return p;
        } else {
            Entry p = t.parent;
            Entry ch = t;
            while (p != null && ch == p.left) {
                ch = p;
                p = p.parent;
            }
            return p;
        }
    }
}
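Iterating via successor() amounts to an in-order traversal of the red-black tree, so keySet() yields keys in ascending order; a minimal sketch (class name TreeMapDemo is mine):

```java
import java.util.TreeMap;

// Illustrative sketch: because TreeMap is a red-black tree, the keySet()
// iterator (which walks the tree via successor()) returns keys in
// ascending order regardless of insertion order.
public class TreeMapDemo {
    public static String sortedKeys() {
        TreeMap<Integer, String> map = new TreeMap<Integer, String>();
        map.put(3, "c");
        map.put(1, "a");
        map.put(2, "b");
        StringBuilder sb = new StringBuilder();
        for (Integer k : map.keySet())
            sb.append(k); // in-order traversal: 1, 2, 3
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(sortedKeys()); // 123
    }
}
```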
[TODO]
How BlockingQueue and ConcurrentHashMap are implemented?
ThreadLocal
public class ThreadLocal<T> {
    public T get() {
        Thread t = Thread.currentThread();
        ThreadLocalMap map = getMap(t);
        if (map != null) {
            ThreadLocalMap.Entry e = map.getEntry(this);
            if (e != null)
                return (T)e.value;
        }
        return setInitialValue();
    }
    public void set(T value) {
        Thread t = Thread.currentThread();
        ThreadLocalMap map = getMap(t);
        if (map != null)
            map.set(this, value);
        else
            createMap(t, value);
    }
    void createMap(Thread t, T firstValue) {
        t.threadLocals = new ThreadLocalMap(this, firstValue);
    }  
    ThreadLocalMap getMap(Thread t) {
        return t.threadLocals;
    }  
}
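Since each thread owns its ThreadLocalMap (reached through getMap(t)), the same ThreadLocal instance resolves to a different value per thread; a minimal sketch (class name ThreadLocalDemo is illustrative; ThreadLocal.withInitial is the Java 8 convenience factory):

```java
// Illustrative sketch: the same ThreadLocal holds a different value in
// each thread, because get()/set() go through the calling thread's own
// ThreadLocalMap.
public class ThreadLocalDemo {
    static final ThreadLocal<Integer> LOCAL = ThreadLocal.withInitial(() -> 0);

    public static int[] perThreadValues() {
        final int[] seen = new int[2];
        LOCAL.set(1); // this thread's copy only
        Thread other = new Thread(() -> seen[1] = LOCAL.get()); // sees initial 0
        other.start();
        try {
            other.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        seen[0] = LOCAL.get(); // unchanged by the other thread
        return seen;
    }

    public static void main(String[] args) {
        int[] v = perThreadValues();
        System.out.println(v[0] + " " + v[1]); // 1 0
    }
}
```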