Notepad++: Best Text Editor on Windows

IMHO, Notepad++ is the best text editor on Windows.
It provides great features such as auto-completion, periodic backup, and session snapshots.

File Switch
Ctrl+Shift+O to switch between open files

DSpellCheck

XML Tools
Pretty print XML files: Ctrl+Alt+Shift+B

JSTool
JSFormat Ctrl+Alt+M

MIME Tools
URL Encode/Decode
Base64 Encode/Decode
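For reference, the same transformations the MIME Tools plugin performs can be reproduced with just the JDK (this assumes Java 10+ for the Charset overload of URLEncoder; the class name is mine):

```java
import java.net.URLDecoder;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class MimeToolsDemo {
  public static void main(String[] args) {
    // URL encode/decode, as the MIME Tools menu does
    String encoded = URLEncoder.encode("a=1&b=2", StandardCharsets.UTF_8);
    System.out.println(encoded);                                          // a%3D1%26b%3D2
    System.out.println(URLDecoder.decode(encoded, StandardCharsets.UTF_8)); // a=1&b=2

    // Base64 encode/decode
    String b64 = Base64.getEncoder()
        .encodeToString("hello".getBytes(StandardCharsets.UTF_8));
    System.out.println(b64);                                              // aGVsbG8=
    System.out.println(new String(Base64.getDecoder().decode(b64), StandardCharsets.UTF_8));
  }
}
```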

HEX Editor

TextFX

-- PS: As I am moving to Linux/Mac-based development, it's time for me to learn and use some text editors that work on all platforms.
So this is a kind of summary of what I learned about Notepad++ over the past several years.

Eclipse Plug-ins Developers Should Check Out

Lucene Similarity and Score

Changing Similarity in Solr
Check Solr wiki:
A (global) <similarity> declaration can be used to specify a custom Similarity implementation that you want Solr to use when dealing with your index. For example:

<similarity class="solr.DFRSimilarityFactory">
  <str name="basicModel">P</str>
  <str name="afterEffect">L</str>
  <str name="normalization">H2</str>
  <float name="c">7</float>
</similarity>

Custom Per-Field Similarity

<fieldType name="text_ib">
  <analyzer/>
  <similarity class="solr.IBSimilarityFactory">
    <str name="distribution">SPL</str>
    <str name="lambda">DF</str>
    <str name="normalization">H2</str>
  </similarity>
</fieldType>

If no (global) <similarity> is configured in the schema.xml file, an implicit instance of DefaultSimilarityFactory is used.

Lucene scoring supports a number of pluggable information retrieval models, including:
Vector Space Model (VSM)
Probabilistic models such as Okapi BM25 and DFR
Language models



Changing Scoring — Similarity

Changing Similarity is an easy way to influence scoring. This is done at index time with IndexWriterConfig.setSimilarity(Similarity) and at query time with IndexSearcher.setSimilarity(Similarity). Be sure to use the same Similarity at query time as at index time (so that norms are encoded/decoded correctly).

The Scorer abstract class provides common scoring functionality for all Scorer implementations and is the heart of the Lucene scoring process. 


Similarity
SimilarityBase
TFIDFSimilarity
Lucene combines Boolean model (BM) of Information Retrieval with Vector Space Model (VSM) of Information Retrieval - documents "approved" by BM are scored by VSM.
In VSM, documents and queries are represented as weighted vectors in a multi-dimensional space, where each distinct index term is a dimension, and weights are Tf-idf values.


The Lucene Practical Scoring Function:

score(q,d) = coord(q,d) · queryNorm(q) · Σ_{t in q} ( tf(t in d) · idf(t)² · t.getBoost() · norm(t,d) )
tf(t in d) correlates to the term's frequency, defined as the number of times term t appears in the currently scored document d. Documents that have more occurrences of a given term receive a higher score.
tf(t in d) = frequency^½

  public float tf(float freq) {
    return (float)Math.sqrt(freq);
  }

idf(t) stands for Inverse Document Frequency. This value correlates to the inverse of docFreq (the number of documents in which the term t appears). This means rarer terms give a higher contribution to the total score. idf(t) appears for t in both the query and the document, hence it is squared in the equation.
idf(t) = 1 + log( numDocs / (docFreq + 1) )

  public float idf(long docFreq, long numDocs) {
    return (float)(Math.log(numDocs/(double)(docFreq+1)) + 1.0);
  }
coord(q,d) is a score factor based on how many of the query terms are found in the specified document. Typically, a document that contains more of the query's terms will receive a higher score than another document with fewer query terms. This is a search time factor computed in coord(q,d) by the Similarity in effect at search time. 
  public float coord(int overlap, int maxOverlap) {
    return overlap / (float)maxOverlap;
  }
queryNorm(q) is a normalizing factor used to make scores between queries comparable. This factor does not affect document ranking (since all ranked documents are multiplied by the same factor), but rather just attempts to make scores from different queries (or even different indexes) comparable. This is a search-time factor computed by the Similarity in effect at search time. The default computation in DefaultSimilarity produces a Euclidean norm:
queryNorm(q) = queryNorm(sumOfSquaredWeights) = 1 / sumOfSquaredWeights^½

The sum of squared weights (of the query terms) is computed by the query Weight object. For example, a BooleanQuery computes this value as: 

sumOfSquaredWeights = q.getBoost()² · Σ_{t in q} ( idf(t) · t.getBoost() )²
  public float queryNorm(float sumOfSquaredWeights) {
    return (float)(1.0 / Math.sqrt(sumOfSquaredWeights));
  }
t.getBoost() is a search-time boost of term t in the query q, as specified in the query text (see query syntax), or as set by application calls to setBoost().

norm(t,d) encapsulates a few (indexing time) boost and length factors:
  • Field boost - set by calling field.setBoost() before adding the field to a document.
  • lengthNorm - computed when the document is added to the index in accordance with the number of tokens of this field in the document, so that shorter fields contribute more to the score. LengthNorm is computed by the Similarity class in effect at indexing.
The computeNorm(org.apache.lucene.index.FieldInvertState) method is responsible for combining all of these factors into a single float.
When a document is added to the index, all the above factors are multiplied. If the document has multiple fields with the same name, all their boosts are multiplied together:
norm(t,d) = lengthNorm · ∏_{field f in d named as t} f.boost()
Note that search time is too late to modify this norm part of scoring, e.g. by using a different Similarity for search.
  public float lengthNorm(FieldInvertState state) {
    final int numTerms;
    if (discountOverlaps)
      numTerms = state.getLength() - state.getNumOverlap();
    else
      numTerms = state.getLength();
    return state.getBoost() * ((float) (1.0 / Math.sqrt(numTerms)));
  }

  public final long computeNorm(FieldInvertState state) {
    float normValue = lengthNorm(state);
    return encodeNormValue(normValue);
  }
  public final long encodeNormValue(float f) {
    return SmallFloat.floatToByte315(f);
  }


Algorithm [recursive backtracking]: Generate all possible strings by replacing ? with 0 and 1

The original question is from
http://www.codebytes.in/2014/11/coding-interview-question-given-string.html
Question: Given a string (for example: "a?bc?def?g"), write a program to generate all the possible strings by replacing ? with 0 and 1.

Example:
Input : a?b?c?
Output: a0b0c0, a0b0c1, a0b1c0, a0b1c1, a1b0c0, a1b0c1, a1b1c0, a1b1c1.


public static List<String> replaceBy0Or1(String src) {
  // store the ? positions
  Objects.requireNonNull(src, "src can't be null");
  List<Integer> replaceIndex = new ArrayList<>();

  for (int i = 0; i < src.length(); i++) {
    if (src.charAt(i) == '?') {
      replaceIndex.add(i);
    }
  }

  StringBuilder sb = new StringBuilder(src);
  List<String> result = new ArrayList<>();
  helper(sb, replaceIndex, 0, result);

  return result;
}

public static void helper(StringBuilder sb, List<Integer> replaceIndex,
    int idx, List<String> result) {
  if (idx == replaceIndex.size()) {
    result.add(sb.toString());
    return;
  }

  sb.setCharAt(replaceIndex.get(idx), '0');
  helper(sb, replaceIndex, idx + 1, result);
  sb.setCharAt(replaceIndex.get(idx), '1');
  helper(sb, replaceIndex, idx + 1, result);
}

public static void main(String[] args) {
  replaceBy0Or1("?c?o?d?e??").stream().forEach(
      str -> System.out.println(str));
  replaceBy0Or1("???").stream().forEach(str -> System.out.println(str));
  replaceBy0Or1("?").stream().forEach(str -> System.out.println(str));
  replaceBy0Or1("a").stream().forEach(str -> System.out.println(str));
  replaceBy0Or1("").stream().forEach(str -> System.out.println(str));
}


Solr: Sort Groups Ascending (asc_max) by the Max Value in Each Group

Use Case
In Solr grouping, when sort=time asc_max, we want to sort groups ascending by the max (not min) value of the time field in each group; conversely, when sort=time desc_min, we want to sort groups descending by the min (not max) value of the time field in each group.

Background
Using Solr grouping, we can group documents that share a common field value, and use sort to specify how the groups are sorted.
For example: sort=time asc&group.field=subject

See the Result Grouping wiki: when sort=time asc, the groups are sorted by the minimum value of the time field in each group; when sort=time desc, the groups are sorted by the maximum value of the time field in each group.
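This default (asc orders groups by their minimum, desc by their maximum) can be illustrated with plain Java collections, independent of Solr; the group names and time values here are made up:

```java
import java.util.*;
import java.util.stream.*;

public class GroupSortDemo {
  public static void main(String[] args) {
    // subject -> time values of its documents
    Map<String, List<Long>> groups = new HashMap<>();
    groups.put("A", Arrays.asList(5L, 100L));
    groups.put("B", Arrays.asList(10L, 20L));

    // sort=time asc: groups ordered by their MIN time -> A (5) before B (10)
    List<String> asc = groups.entrySet().stream()
        .sorted(Comparator.comparing(e -> Collections.min(e.getValue())))
        .map(Map.Entry::getKey).collect(Collectors.toList());

    // sort=time desc: groups ordered by their MAX time, descending -> A (100) before B (20)
    List<String> desc = groups.entrySet().stream()
        .sorted(Comparator.comparing(
            (Map.Entry<String, List<Long>> e) -> Collections.max(e.getValue())).reversed())
        .map(Map.Entry::getKey).collect(Collectors.toList());

    System.out.println(asc);  // [A, B]
    System.out.println(desc); // [A, B]
    // asc_max (by MAX ascending) would instead put B (20) before A (100)
  }
}
```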

But in some cases this is not what we want: when sort=time asc, we may want to sort the groups by the max (not min) value of the time field in each group; conversely, when sort=time desc, we may want to sort the groups by the min (not max) value.

Solr doesn't support this, so we have to figure out how to implement it on our own.

The following is my first implementation: it works, but there is still a lot to improve.

How Solr Grouping Works
Basically there are two phases. First, the collect method of AbstractFirstPassGroupingCollector (in our case, sort=time asc, so TermFirstPassGroupingCollector) goes through all docs that match the query and filters; its orderedGroups maintains the top X groups based on the min (sort=time asc) or max (sort=time desc) value in each group. Solr then gets the top groups via AbstractFirstPassGroupingCollector.getTopGroups(int, boolean) and calls AbstractSecondPassGroupingCollector (in this case TermSecondPassGroupingCollector) to fetch the docs in each group.

FieldComparator & LongAbnormalComparator
FieldComparator plays an important role here: it compares hits to determine their sort order. We create a custom LongAbnormalComparator, which uses a Map minMaxMap to store the max value of each group in asc (asc_max) mode and the min value of each group in desc (desc_min) mode. When comparing groups, it uses the values in minMaxMap.

One caveat: the size of minMaxMap and the values array has to be the total group count in the index, so they can hold all groups.
public static interface AbnormalComparator { }
public static final class LongAbnormalComparator extends LongComparator implements  AbnormalComparator{
  private boolean reverse;
  protected final BytesRef[] slotGroupValues;
  protected final Map<BytesRef,Long> minMaxMap = new HashMap<BytesRef,Long>();
  BinaryDocValues groupTerms;
  private SortedDocValues index;
  private String groupField;

  LongAbnormalComparator(int numHits, String field, Parser parser,
      Long missingValue, boolean reverse, String groupField) {
    super(numHits, field, parser, missingValue);
    this.groupField = groupField;
    slotGroupValues = new BytesRef[numHits];
    this.reverse = reverse;
  }
  
  @Override
  public FieldComparator<Long> setNextReader(AtomicReaderContext context)
      throws IOException {
    index = FieldCache.DEFAULT.getTermsIndex(context.reader(), groupField);
    return super.setNextReader(context);
  }
  
  public boolean isCompetitive(int slot, int doc) {
    boolean isCompetitive = false;
    long v2 = currentReaderValues.get(doc);
    if (v2 == 0) {
      v2 = getMissingValue();
    }
    
    BytesRef groupBytes = getGroupFieldValue(doc);
    Long oldValue = minMaxMap.get(groupBytes);
    if (reverse) {
      if (oldValue == null) {
        isCompetitive = true;
      } else {
        if (v2 < oldValue) {
          isCompetitive = true;
        }
      }
    } else {
      if (oldValue == null) {
        isCompetitive = true;
      } else {
        if (v2 > oldValue) {
          isCompetitive = true;
        }
      }
    }
    return isCompetitive;
  }

  private BytesRef getGroupFieldValue(int doc) {
    BytesRef groupBytes = new BytesRef();
    index.get(doc, groupBytes);
    return groupBytes;
  }

  @Override
  public int compare(int slot1, int slot2) {
    // In abnormal mode the TreeSet is built only once (buildSortedSet);
    // groups are compared by their stored min/max values.
    BytesRef group1 = slotGroupValues[slot1];
    BytesRef group2 = slotGroupValues[slot2];
    final long v1 = minMaxMap.get(group1);
    final long v2 = minMaxMap.get(group2);
    if (v1 > v2) {
      return 1;
    } else if (v1 < v2) {
      return -1;
    } else {
      return 0;
    }
  }
  
  private long getMissingValue() {
    if (reverse) {
      // sort=time desc_min
      return Long.MAX_VALUE;
    } else {
      // sort=time asc_max
      return Long.MIN_VALUE;
    }
  }
  @Override
  public void copy(int slot, int doc) {
    long v2 = currentReaderValues.get(doc);
    if (v2 == 0) {
      v2 = getMissingValue();
    }
    
    BytesRef groupBytes = getGroupFieldValue(doc);
    Long oldValue = minMaxMap.get(groupBytes);
    
    // update maxValues if needed
    if (reverse) {
      if (oldValue == null) {
        update(slot, doc, v2, groupBytes);
      } else {
        // desc_min mode, only update if curr is smaller
        if (v2 < oldValue) {
          update(slot, doc, v2, groupBytes);
        }
      }
    } else {
      // asc_max mode, only update if curr is larger
      if (oldValue == null) {
        update(slot, doc, v2, groupBytes);
      } else {
        if (v2 > oldValue) {
          update(slot, doc, v2, groupBytes);
        }
      }
    }
    values[slot] = v2;
  }

  private void update(int slot, int doc, long v2, BytesRef groupBytes) {
    slotGroupValues[slot] = groupBytes;
    minMaxMap.put(groupBytes, v2);
  }
  
  public long[] getValues() {
    return values;
  }
}
Change in TermFirstPassGroupingCollector
public class TermFirstPassGroupingCollector extends AbstractFirstPassGroupingCollector<BytesRef> {
  private String groupField;
  private boolean hasAbnormal;

  public TermFirstPassGroupingCollector(String groupField, Sort groupSort,
      int topNGroups) throws IOException {
    super();
    if (topNGroups < 1) {
      throw new IllegalArgumentException("topNGroups must be >= 1 (got "
          + topNGroups + ")");
    }
    final SortField[] sortFields = groupSort.getSort();
    for (int i = 0; i < sortFields.length; i++) {
      final SortField sortField = sortFields[i];
      if (sortField.isAbnormal()) {
        hasAbnormal = true;
        break;
      }
    }
    if (!hasAbnormal) {
      super.init(groupSort, topNGroups);
      return;
    }
    Integer groupCount = groupCountTL.get();
    subInit(groupSort, groupCount, sortFields, groupField);
  }
  
  private static final ThreadLocal<Integer> groupCountTL= new ThreadLocal<Integer>();
  public static void setGroupCountTLValue(int groupCount)
  {
    groupCountTL.set(groupCount);
  }
  public static void removeGroupCountTL()
  {
    groupCountTL.remove();
  }
  private void subInit(Sort groupSort, int topNGroups,
      final SortField[] sortFields, String groupField) throws IOException {
    this.groupField = groupField;
    this.groupSort = groupSort;
    this.topNGroups = topNGroups;
    
    comparators = new FieldComparator[sortFields.length];
    compIDXEnd = comparators.length - 1;
    reversed = new int[sortFields.length];
    for (int i = 0; i < sortFields.length; i++) {
      final SortField sortField = sortFields[i];
      comparators[i] = sortField.getComparatorWithAbnormal(topNGroups, i,
          groupField);
      reversed[i] = sortField.getReverse() ? -1 : 1;
    }
    
    spareSlot = topNGroups;
    groupMap = new HashMap<BytesRef,CollectedSearchGroup<BytesRef>>(topNGroups);
  }
  
  @Override
  public void collect(int doc) throws IOException {
    if(!hasAbnormal)
    {
      super.collect(doc);
      return;
    }
    final BytesRef groupValue = getDocGroupValue(doc);
    final CollectedSearchGroup<BytesRef> group = groupMap.get(groupValue);

    if (group == null) {
        // Add a new CollectedSearchGroup:
        CollectedSearchGroup<BytesRef> sg = new CollectedSearchGroup<BytesRef>(comparators);
        sg.groupValue = copyDocGroupValue(groupValue, null);
        sg.comparatorSlot = groupMap.size();
        sg.topDoc = docBase + doc;
        for (FieldComparator<?> fc : comparators) {
          fc.copy(sg.comparatorSlot, doc);
        }
        groupMap.put(sg.groupValue, sg);
        return;
    }
    // Update existing group:
    for (int compIDX = 0;; compIDX++) {
      final FieldComparator<?> fc = comparators[compIDX];
      
      if (fc instanceof LongAbnormalComparator) {
        LongAbnormalComparator my = (LongAbnormalComparator) fc;
        if (!my.isCompetitive(group.comparatorSlot, doc)) {
          return;
        }
        else
        {
          fc.copy(spareSlot, doc);
          // Definitely competitive; set remaining comparators:
          for (int compIDX2 = compIDX + 1; compIDX2 < comparators.length; compIDX2++) {
            comparators[compIDX2].copy(spareSlot, doc);
          }
          break;
        }
      } else {
        fc.copy(spareSlot, doc);
        int c = reversed[compIDX] * fc.compare(group.comparatorSlot, spareSlot);
        if (c < 0) {
          // Definitely not competitive.
          return;
        } else if (c > 0) {
          // Definitely competitive; set remaining comparators:
          for (int compIDX2 = compIDX + 1; compIDX2 < comparators.length; compIDX2++) {
            comparators[compIDX2].copy(spareSlot, doc);
          }
          break;
        } else if (compIDX == compIDXEnd) {
          // Here c=0. If we're at the last comparator, this doc is not
          // competitive, since docs are visited in doc Id order, which means
          // this doc cannot compete with any other document in the queue.
          return;
        }
      }
    }

    // Remove before updating the group since lookup is done via comparators
    // TODO: optimize this
    final CollectedSearchGroup<BytesRef> prevLast;
    if (orderedGroups != null) {
      prevLast = orderedGroups.last();
      orderedGroups.remove(group);
//      assert orderedGroups.size() == topNGroups-1;
    } else {
      prevLast = null;
    }

    group.topDoc = docBase + doc;

    // Swap slots
    final int tmp = spareSlot;
    spareSlot = group.comparatorSlot;
    group.comparatorSlot = tmp;

    // Re-add the changed group
    if (orderedGroups != null) {
      orderedGroups.add(group);
//      assert orderedGroups.size() == topNGroups;
      final CollectedSearchGroup<?> newLast = orderedGroups.last();
      // If we changed the value of the last group, or changed which group was last, then update bottom:
      if (group == newLast || prevLast != newLast) {
        for (FieldComparator<?> fc : comparators) {
          fc.setBottom(newLast.comparatorSlot);
        }
      }
    }
  }
}
What is Missing
Update solr.search.QueryParsing.StrParser.getSortDirection() to parse the sort string: when sort is asc_max or desc_min, set the SortField's abnormal flag to true.
We also need a wrapper request handler: when sort is like time asc_max or time desc_min, it first uses TermAllGroupsCollector to get the total group count.


Resources
Solr Join: Return Parent and Child Documents
Use Solr map function query(group.sort=map(type,1,1,-1) ) in group flat mode
Solr: Update other Document in DocTransformer by Writing custom SolrWriter
Solr: Use DocTransformer to dynamically Generate groupCount and time value for group doc

Solr DateToLongTransfomer: Convert Date to Milliseconds

Scenario
While developing and debugging a Solr feature, I constantly need to check the value of a date field.
In the response, the date field is a string, but in Eclipse debug mode the value is a long (milliseconds since the epoch), since Solr actually stores the milliseconds in the index. So why not write a transformer that adds the milliseconds to the response?

Here it is.
DateToLongTransfomerFactory
public class DateToLongTransfomerFactory extends TransformerFactory {
  @Override
  public DocTransformer create(String field, SolrParams params,
      SolrQueryRequest req) {
    return new DateToLongTransfomer(field, params);
  }
  
  /**
   * org.apache.solr.search.SolrReturnFields.parseFieldList(String[],
   * SolrQueryRequest) DocTransformers augmenters = new DocTransformers();
   * 
   * DocTransformer is thread safe.
   */
  private static class DateToLongTransfomer extends DocTransformer {
    private String fl;
    private String field;
    
    public DateToLongTransfomer(String field, SolrParams params) {
      // field is the name of transformer [dateToLong]
      this.field = field;
      fl = Preconditions.checkNotNull(params.get("fl"),
          "fl can't be null in transformer");
    }
    
    @Override
    public void transform(SolrDocument doc, int docid) throws IOException {
      String fieldValue = getFieldValue(doc, fl);
      if (fieldValue != null) {
        doc.addField(field, fieldValue);
      }
    }
    
  public static String getFieldValue(SolrDocument doc, String field) {
    List<String> rst = new ArrayList<String>();
    Object obj = doc.get(field);
    getFieldvalues(doc, rst, obj);
    
    if (rst.isEmpty()) {
      return null;
    }
    return rst.get(0);
  }    

  public static void getFieldvalues(SolrDocument doc, List<String> rst,
      Object obj) {
    if (obj == null) return;
    if (obj instanceof org.apache.lucene.document.Field) {
      org.apache.lucene.document.Field field = (Field) obj;
      String oldValue = field.stringValue();
      if (oldValue != null) {
        rst.add(oldValue);
      }
    } else if (obj instanceof IndexableField) {
      IndexableField field = (IndexableField) obj;
      String oldValue = field.stringValue();
      if (oldValue != null) {
        rst.add(oldValue);
      }
    } else if (obj instanceof Collection) {
      Collection colls = (Collection) obj;
      for (Object newObj : colls) {
        getFieldvalues(doc, rst, newObj);
      }
    } else {
      rst.add(obj.toString());
      // throw new RuntimeException("When this is called? obj.type:"
      // + obj.getClass());
    }
  }    
  }
}
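To wire it up, the factory would be registered in solrconfig.xml and invoked via the fl parameter; the package name below is illustrative, so adjust it to where the class actually lives:

```xml
<!-- solrconfig.xml: register the custom transformer factory -->
<transformer name="dateToLong" class="com.example.solr.DateToLongTransfomerFactory"/>
```

Then a request with something like fl=*,ms:[dateToLong fl=time] should add an ms field containing the milliseconds (ms is just an alias; the transformer writes to whatever display name it is given).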
