Solr RssResponseWriter by Extending XMLWriter


The problem
Customers want to view Solr search results in an RSS reader, so we need to customize the Solr response into RSS format.
There are several ways to do this in Solr:
1. We can use the XSLT Response Writer: write an XSLT stylesheet that transforms the Solr XML response into RSS format. Check XsltResponseWriter:
wt=xslt&tr=example_rss.xsl
We can adapt the example_rss.xsl or example_atom.xsl that Solr provides to match our needs.
2. We can write our own Solr ResponseWriter class that emits the RSS response, as described in this post.
Solr ResponseWriter
Solr defines several Response Writers, such as XMLResponseWriter, XsltResponseWriter, CSVResponseWriter, etc.
TextResponseWriter is the base class for text-oriented response writers. Solr also allows us to define our own new Response Writers.
The Solution
Since the RSS format is similar to the Solr XML response, we can extend XMLWriter and reuse as much existing code as possible.

The difference between Solr XML and Expected RSS format
1. The overall structural difference
In Solr, the format is like: response->result->doc. 
In Rss, the format is like below:
<rss version="2.0">
  <channel>
    <title>title here</title> //channel metadata
    <link>link here</link>    //channel metadata
    <description>description here</description> //channel metadata
    <item>
       <title>item1 title</title> //item metadata
       <link>item1 link</link> //item metadata
       <description>item1 description</description> //item metadata
    </item>
  </channel>
</rss>
For this, we need to override the writeResponse method to change the overall structure.
2. The element structural difference
In Solr, an element is written as:
<element-type name="id"> //  element type such as str, int, arr
</element-type>
In RSS, the tag itself carries the element name:
<element-name> //  element name such as title, link, etc.
</element-name>
For this, we need to update the writeStr/writeInt/writeLong implementations.
3. Field Name Mapping
The field names in Solr may not be what we want in the feed; for example, we may want to map the field "url" to "link". We can define a new parameter, flmap, and configure the mapping in solrconfig.xml:
<str name="fl">title,url,id,score,physicalpath</str>

<str name="flmap">title,link,,,physicalpath</str> 
In the example above, url is renamed to link, the fields id and score are dropped, and title and physicalpath keep their names.
Or we can add fl, flmap as request parameters.
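For example, the mapping could also be supplied per request; a hedged example (host, port, core name, and field list are illustrative):
http://localhost:8983/solr/collection1/select?q=test&wt=rss&fl=title,url,physicalpath&flmap=title,link,physicalpath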

RssResponseWriter Implementation
import com.google.common.base.CharMatcher;
import com.google.common.base.Splitter;
import com.google.common.collect.Lists;

public class RssWriter extends XMLWriter {
  private static final Splitter split = Splitter.on(CharMatcher.anyOf(","))
      .trimResults();
  private static final char[] XML_START1 = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n"
      .toCharArray();
  private Map<String,String> oldToNewFLMapping = new HashMap<String,String>();
  private String baseURL;
  
  public RssWriter(Writer writer, SolrQueryRequest req, SolrQueryResponse rsp)
      throws IOException {
    super(writer, req, rsp);
    SolrParams solrParams = req.getParams();
    String fl = solrParams.get("fl");
    
    
    String flmap = solrParams.get("flmap");
    if (fl == null || flmap == null) {
      throw new IOException("do not get fl or flmap parameter");
    }
    
    ArrayList<String> oldFLs = Lists.newArrayList(split.split(fl));
    ArrayList<String> newFLs = Lists.newArrayList(split.split(flmap));
    if (oldFLs.size() != newFLs.size()) {
      throw new IOException("field count different in fl and rnamefl parameter");
    }
    
    Iterator<String> oldIt = oldFLs.iterator(), newIt = newFLs.iterator();
    while (newIt.hasNext()) {
      String oldFl = oldIt.next();
      String newFl = newIt.next();
      if (!StringUtils.isBlank(newFl)) {
        oldToNewFLMapping.put(oldFl, newFl);
      }
    }
    getBaseUrl(req);
    
  }
  @Override
  public void writeResponse() throws IOException {
    writer.write(XML_START1);
    writer.write("<rss version=\"2.0\">");
    writer.write("<channel>");
    String qstr = req.getParams().get(CommonParams.Q);
    writeVal("title", qstr);
    String fullUrl = req.getContext().get("fullUrl").toString();
    writeCdata("link", fullUrl);
    writeVal("copyright", "Copyright ......");
    
    NamedList<?> lst = rsp.getValues();
    Object obj = lst.get("response");
    DocList docList = null;
    if (obj instanceof ResultContext) {
      ResultContext context = (ResultContext) obj;
      docList = context.docs;
    } else if (obj instanceof DocList) {
      docList = (DocList) obj;
    } else {
      throw new RuntimeException("Unkown type: " + obj.getClass());
    }
    writeVal("numFound", Integer.toString(docList.matches()));
    writeVal("start", Integer.toString(docList.offset()));
    writeVal("maxScore", Float.toString(docList.maxScore()));
    
    Set<String> fields = new HashSet<String>(oldToNewFLMapping.keySet());
    SolrIndexSearcher searcher = req.getSearcher();
    DocIterator iterator = docList.iterator();
    int sz = docList.size();
    for (int i = 0; i < sz; i++) {
      int id = iterator.nextDoc();
      Document doc = searcher.doc(id, fields);
      writeVal("item", doc);
    }
    writer.write("\n</channel>");
    writer.write("\n</rss>");
  } 
  @Override
  public void writeSolrDocument(String name, SolrDocument doc,
      ReturnFields returnFields, int idx) throws IOException {
    startTag("item", false);
    incLevel();
    boolean hasLink = false;
    
    Set<String> oldFLs = oldToNewFLMapping.keySet();
    for (String oldFL : returnFields.getLuceneFieldNames()) {
      String newName = oldFL;
      if (oldFLs.contains(oldFL)) {
        newName = oldToNewFLMapping.get(oldFL);
      }
      Object val = doc.getFieldValue(oldFL);
      writeVal(newName, val);
      if ("link".equalsIgnoreCase(newName)) {
        hasLink = true;
      }
    }
    if (!hasLink) {
      String uniqueKey = schema.getUniqueKeyField().getName();
      String uniqueKeyValue = "";
      if (uniqueKey != null) {
        Object obj = doc.getFieldValue(uniqueKey);
        if (obj instanceof Field) {
          Field field = (Field) obj;
          uniqueKeyValue = field.stringValue();
        } else {
          uniqueKeyValue = obj.toString();
        }
      }
      writeCdata("link", baseURL + "viewsourceservlet?docid=" + uniqueKeyValue);
    }
    decLevel();
    if (doIndent) indent();
    writer.write("</item>");
  }
  @Override
  public void writeArray(String name, Iterator iter) throws IOException {
    if (iter.hasNext()) {
      incLevel();
      while (iter.hasNext()) {
        writeVal(name, iter.next());
      }
      decLevel();
    } else {
      startTag(name, true);
    }
  }
  @Override
  public void writeStr(String name, String val, boolean escape)
      throws IOException {
    writePrim(name, val, escape);
  }
  public void writeCdata(String tag, String val) throws IOException {
    writer.write("<" + tag + ">");
    writer.write("<![CDATA[" + val + "]]>");
    writer.write("</" + tag + ">");
  }
  private void writePrim(String name, String val, boolean escape)
      throws IOException {
    int contentLen = val == null ? 0 : val.length();
    
    startTag(name, contentLen == 0);
    if (contentLen == 0) return;
    
    if (escape) {
      XML.escapeCharData(val, writer);
    } else {
      writer.write(val, 0, contentLen);
    }
    writer.write('<');
    writer.write('/');
    writer.write(name);
    writer.write('>');
  }  
  void startTag(String name, boolean closeTag) throws IOException {
    if (doIndent) indent();
    
    writer.write('<');
    writer.write(name);
    if (closeTag) {
      writer.write("/>");
    } else {
      writer.write('>');
    }
  }
  // derive the base URL (protocol://host:port/, up to and including the third '/') from the request url
  public void getBaseUrl(SolrQueryRequest req) {
    String url = req.getContext().get("url").toString();
    int i = 0;
    int j = 0;
    for (j = 0; j < url.length() && i < 3; ++j) {
      if (url.charAt(j) == '/') {
        ++i;
      }
    }
    baseURL = url.substring(0, j);
  }
  
  @Override
  public void writeNull(String name) throws IOException {
    writePrim(name, "", false);
  }
  
  @Override
  public void writeInt(String name, String val) throws IOException {
    writePrim(name, val, false);
  }
  
  @Override
  public void writeLong(String name, String val) throws IOException {
    writePrim(name, val, false);
  }
  
  @Override
  public void writeBool(String name, String val) throws IOException {
    writePrim(name, val, false);
  }
  
  @Override
  public void writeFloat(String name, String val) throws IOException {
    writePrim(name, val, false);
  }
  
  @Override
  public void writeDouble(String name, String val) throws IOException {
    writePrim(name, val, false);
  }
  
  @Override
  public void writeDate(String name, String val) throws IOException {
    writePrim(name, val, false);
  }
}
RSSResponseWriter
public class RSSResponseWriter implements QueryResponseWriter {
  public void write(Writer writer, SolrQueryRequest req, SolrQueryResponse rsp)
      throws IOException {
    RssWriter rssWriter = new RssWriter(writer, req, rsp);
    try {
      rssWriter.writeResponse();
    } finally {
      rssWriter.close();
    }
  }
  public String getContentType(SolrQueryRequest request,
      SolrQueryResponse response) {
    return CONTENT_TYPE_XML_UTF8;
  }
  public void init(NamedList args) {}
}
Configuration
<requestHandler name="/rss" class="solr.SearchHandler">
 <lst name="defaults">
  <str name="rows">10</str>
  <str name="wt">rss</str>
  <!--default mapping-->
  <str name="fl">title,url,id,score,physicalpath</str>
  <str name="flmap">title,link,,,,physicalpath</str> 
 </lst>
</requestHandler>
<queryResponseWriter name="rss" class="org.apache.solr.response.RSSResponseWriter"/>
Resources
QueryResponseWriter
Solr Search Result (attribute-to-tag) customization using XsltResponseWriter
RSS 2.0 Specification
RSS Tutorial

Solr: Add new fields with Default Value for Existing Documents


In some cases, we have to upgrade an existing Solr application to add new fields, and we don't want to, or can't, reindex the old data.
For example, the old Solr application only stored information about regular files. Now we need to upgrade it to store other types of files, so we add a field fileType: fileType=0 means a regular file, fileType=1 means a folder, and other types may be added in the future.

We add the following definition in schema.xml:
<field name="fileType" type="TINT" indexed="true" stored="true" default="-1"/>

Adding this definition to schema.xml doesn't affect existing data: old documents still have no fileType field. There is no fileType value in the response, and no terms in the fileType field either; the query fileType:[* TO *] returns an empty result.

To fix this issue, we have to consider two parts: the search query and the search response.
Fix Search Query by Querying NULL Field
As all the old data describes regular files, a missing fileType value means a regular file.
When we search for regular files, the query should be adjusted as below: it fetches documents where fileType is 0 or where fileType has no value at all.
-(-fileType:0 AND fileType:[* TO *])

No change is needed when searching for other file types. We can wrap this change in our own search handler: if the query includes fileType:0, rewrite it to -(-fileType:0 AND fileType:[* TO *]); alternatively, we could write a new query parser. A sketch of such a handler is shown below.
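A minimal sketch of this rewrite, assuming we subclass the standard SearchHandler (the class name and the literal string match are illustrative, not the project's actual code):
import org.apache.solr.common.params.CommonParams;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.handler.component.SearchHandler;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.SolrQueryResponse;

public class FileTypeAwareSearchHandler extends SearchHandler {
  @Override
  public void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp) throws Exception {
    String q = req.getParams().get(CommonParams.Q);
    if (q != null && q.contains("fileType:0")) {
      ModifiableSolrParams params = new ModifiableSolrParams(req.getParams());
      // also match old documents that have no fileType value at all
      params.set(CommonParams.Q, q.replace("fileType:0", "-(-fileType:0 AND fileType:[* TO *])"));
      req.setParams(params);
    }
    super.handleRequestBody(req, rsp);
  }
}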
Fix Search Response by Using DocTransformer to Add Default Value
For the old data, there is no value for fileType, so we need to add fileType=0 to the search response. To do this, we can define a Solr DocTransformer.
A DocTransformer allows us to modify the fields that are returned to the user.
In our DocTransformer, we check the value of fileType; if there is no value, we set it to the default value. The response (XML or JSON) will then show fileType=0 for old documents.
NullDefaultValueTransformerFactory Implementation
public class NullDefaultValueTransformerFactory extends TransformerFactory {
  private Map<String,String> nullDefaultMap = new HashMap<String,String>();
  private boolean enabled = false;
  protected static Logger logger = LoggerFactory
      .getLogger(NullDefaultValueTransformerFactory.class);
  public void init(NamedList args) {
    super.init(args);
    if (args != null) {
      SolrParams params = SolrParams.toSolrParams(args);
      enabled = params.getBool("enabled", false);
      if (!enabled) return;
      
      List<String> fieldNames = new ArrayList<String>();
      String str = params.get("fields");
      if (str != null) {
        fieldNames = StrUtils.splitSmart(str, ',');
      }
      List<String> nullDefaultvalue = new ArrayList<String>();
      str = params.get("nullDefaultValue");
      if (str != null) {
        nullDefaultvalue = StrUtils.splitSmart(str, ',');
      }
      if (fieldNames.size() != nullDefaultvalue.size()) {
        logger.error("Size doesn't match, fieldNames.size: "
            + fieldNames.size() + ", nullDefaultvalue.size: "
            + nullDefaultvalue.size());
        enabled = false;
        return;
      }
      if (fieldNames.isEmpty()) {
        logger.error("No fields are set.");
        enabled = false;
        return;
      }
      
      for (int i = 0; i < fieldNames.size(); i++) {
        nullDefaultMap.put(fieldNames.get(i).trim(), nullDefaultvalue.get(i)
            .trim());
      }
    }
  }
  public DocTransformer create(String field, SolrParams params,
      SolrQueryRequest req) {
    return new NullDefaultValueTransformer();
  }
  
  class NullDefaultValueTransformer extends DocTransformer {
    public String getName() {
      return NullDefaultValueTransformer.class.getName();
    }
    public void transform(SolrDocument doc, int docid) throws IOException {
      if (enabled) {
        Iterator<Entry<String,String>> it = nullDefaultMap.entrySet()
            .iterator();
        while (it.hasNext()) {
          Entry<String,String> entry = it.next();
          String fieldName = entry.getKey();
          Object obj = doc.getFieldValue(fieldName);
          if (obj == null) {
            doc.setField(fieldName, entry.getValue());
          }
        }
      }
    }
  }
}
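To wire the factory in, it can be registered in solrconfig.xml and requested through the fl parameter; a hedged sketch (the transformer name and package are assumptions):
<transformer name="nulldefault" class="com.mycompany.solr.NullDefaultValueTransformerFactory">
  <bool name="enabled">true</bool>
  <str name="fields">fileType</str>
  <str name="nullDefaultValue">0</str>
</transformer>
A request can then ask for the transformer with fl=*,[nulldefault].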
With the previous two changes, the client application can treat old documents as if they had the default value 0 for fileType. Be aware that some functions, such as sort and stats, will still not work on these old documents.
Resources
Solr: Use DocTransformer to Change Response
Searching for date range or null/no field in Solr
Solr DocTransformers

Be Sure to Reuse SolrServer and Apache HttpClient Instance


The Problem
In one Solr-related application, we use the SolrJ HttpSolrServer to push data into a remote Solr server. On one load-testing machine, we saw intermittent SocketException errors on the remote Solr server:
Caused by: org.apache.solr.client.solrj.SolrServerException: java.net.SocketException: Connection reset
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:340)
Although we retry each request several times, we still wanted to know what caused this exception and to look at the network usage on the remote Solr server.
Use TCPView in Windows to Monitor Network
On Linux, we can use lsof -a -i -p <pid> to list the ports a process has open. On Windows, we can use TCPView
from Windows Sysinternals to monitor TCP connections.

We saw thousands of connections in TIME_WAIT, CLOSE_WAIT, and FIN_WAIT2 states on both the client and the remote Solr server. This looked suspicious.
HttpSolrServer Usage Pattern
Since the main logic of the client is to use HttpSolrServer to send requests to the Solr server, we suspected that HttpSolrServer was being used incorrectly.
Checking the code: the client program uses a Java thread pool to send requests; each thread creates a HttpSolrServer, sends one request, then shuts the SolrServer down.

The SolrJ wiki page says:
HttpSolrServer is thread-safe and if you are using the following constructor, you *MUST* re-use the same instance for all requests.  If instances are created on the fly, it can cause a connection leak. The recommended practice is to keep a static instance of HttpSolrServer per solr server url and share it for all requests.

We changed the code to use a global static HttpSolrServer instance, then used TCPView to monitor network usage after the change. Now there are fewer than 20 open connections.

The same logic applies to Apache HttpClient: use a single HttpClient instance per remote host, as sketched below.
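A minimal sketch of the shared-instance pattern, assuming SolrJ 4.x (HttpSolrServer) and Apache HttpClient 4.3+; the URL and pool sizes are placeholders:
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.solr.client.solrj.impl.HttpSolrServer;

public class SharedClients {
  // one shared, thread-safe HttpSolrServer per Solr base URL
  private static final HttpSolrServer SOLR =
      new HttpSolrServer("http://localhost:8983/solr/collection1");

  // one shared HttpClient per application, reused for all requests to a host
  private static final CloseableHttpClient HTTP_CLIENT = HttpClients.custom()
      .setMaxConnPerRoute(20)
      .setMaxConnTotal(100)
      .build();

  public static HttpSolrServer solr() { return SOLR; }

  public static CloseableHttpClient httpClient() { return HTTP_CLIENT; }
}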
Lesson Learned
Use tools from Windows Sysinternals, such as TCPView, to monitor network connections.
Reuse HttpSolrServer and Apache HttpClient instances.
Test Program
If you run the following test programs and monitor them with TCPView, you can immediately notice the difference.
@Test
public void correctUsageSolrServer() throws InterruptedException {
 final String baseURL = "SOLRURL";
 final HttpSolrServer solrServer = new HttpSolrServer(baseURL);
 
 int i = 0;
 while (i++ < 1000) {
  Thread thread = new Thread() {
   @Override
   public void run() {
    SolrQuery query = new SolrQuery("*:*");
    try {
     QueryResponse rsp = solrServer.query(query);
     System.out.println(rsp);
    } catch (Exception e) {
     e.printStackTrace();
     throw new RuntimeException(e);
    }
   }
  };
  
  thread.start();
  Thread.sleep(10);
 }
 // be sure to shutdown solrServer at the end.
 solrServer.shutdown();
}

@Test
public void wrongUsageSolrServer() throws InterruptedException {
 final String baseURL = "SOLRURL";
 
 int i = 0;
 while (i++ < 1000) {
  Thread thread = new Thread() {
   @Override
   public void run() {
    final HttpSolrServer solrServer = new HttpSolrServer(baseURL);
    
    SolrQuery query = new SolrQuery("*:*");
    try {
     QueryResponse rsp = solrServer.query(query);
     System.out.println(rsp);
    } catch (Exception e) {
     e.printStackTrace();
     throw new RuntimeException(e);
    } finally {
     solrServer.shutdown();
    }
   }
  };
  
  thread.start();
  Thread.sleep(10);
 }
}
Resources
SolrJ Wiki
Connection Close in HttpClient
HttpClient Connection Management FAQ and Explanation of TIME_WAIT status
TCP Connection Termination Order

Trouble Shooting Apache Procrun Unable to Start Service Problem


Today I was debugging the batch script that wraps an embedded Jetty + Solr application as a Windows service. The script looks like the one in:
Windows BAT: Using Apache Procrun to Install Java Application As Windows Service

Then I ran installService.bat -service_name service1 -start_params "start;-port;9753" -stop_params shutdown, and it installed the service successfully. But when I clicked service1.exe and tried to start the service, it failed silently: no error message was logged in commons-daemon.log or service1.stderr/stdout.log. Yet when I ran prunsrv //TS//service1 to debug the service, it ran fine.

This looked weird. If procrun logged its error message somewhere obvious, it would be much easier to find the root cause of the problem.
View Procrun Error Message in Event Viewer
To view the error log, we have to use the Windows Event Log: go to Control Panel, search for Event Viewer and open it, then select "Windows Logs" -> "System". Reproduce the problem by starting the service from the service1 GUI; a new error entry appears:
The service1 service failed to start due to the following error: 
The system cannot find the file specified.

Root Cause of "The system cannot find the file specified"
A Google search for "The system cannot find the file specified" turned up the post Build windows service from java application with procrun:
If you use (correct) relative paths to files (especially for prunmgr.exe) in the installation script, the service will install correctly and it will run fine in debugging mode. It will however fail when run normally with any administrative tooling you have.
Generally, you should use absolute paths with procrun.

Checking the service1 GUI, the General tab shows that it indeed uses prunsrv //RS//service1.
Checking the script, I use:
"%PRUNSRV%" //IS//%SERVICE_JAVA% --Install="%MYPATH%prunsrv"
But somehow I had removed the definition of MYPATH from the script. Update the script:
set "PRUNSRV=%~dp0prunsrv.exe"
"%PRUNSRV%" //IS//%SERVICE_JAVA% --Install="%PRUNSRV%"
Now it works.
Lesson Learned
Use absolute paths in the installation script.
Don't use white space in the service name.
Use the Windows System event log to view prunsrv error messages.

Resources
Build windows service from java application with procrun
https://github.com/lenhard/procrun-sample
Windows BAT: Using Apache Procrun to Install Java Application As Windows Service
http://commons.apache.org/proper/commons-daemon/procrun.html

Creating Custom Solr Type to Stream Large Text Field


The Problem
In my project, we run queries on a Solr server and return a combined response back to the client (we can't use SolrJ because the request goes through proxy applications that add extra functionality). But some text fields are very large; we would like to reduce their size so that much less content is transferred.
Steps We Have Done
To reduce the content size, at the remote Solr server we use GZIPOutputStream and Base64OutputStream to compress and encode the string; this reduces the size by more than 85%: an original 134 MB string is compressed to 16 MB.
Read more: Java: Use ZIP Stream and Base64 Stream to Compress Large String Data
Java: Use Zip Stream and Base64 Encoder to Compress Large String Data 
At the client side, when it receives the zipped Base64 string, it first Base64-decodes it, uncompresses it, then adds it as a field into Solr.

But if we do all this in memory, we load the huge original unzipped string (134 MB) into memory, which can cause an OutOfMemoryError. Obviously this is not desirable.

We want to use streams (Base64InputStream and GZIPInputStream) to decode and uncompress the data and write the original string into a temp file. When Solr adds this field, it can use a Reader to read from the temp file and delete the temp file once it's done.

In Lucene, we can pass a Reader to a Field constructor; Lucene will consume the reader and close it when it's done.
Such a field can only be indexed, not stored, but that is fine for us, as this field is only used for search.

Solr doesn't expose this capability, but we can easily extend Solr to define a custom field type that accepts a Reader.

The Solution
Custom Solr Field Type: FileTextField
FileTextField extends org.apache.solr.schema.TextField. When we add a value to a FileTextField, the value can be a string or a reader. If it's a reader, createField creates a Lucene field that takes the reader as its value; Lucene will consume the reader and close it when it's done.
FileTextField has one configuration parameter: deleteFile. If true, it deletes the file after Lucene has read it and written it to the index; if false, it keeps the file. We also set the encoding explicitly in the FileHolder constructor, which avoids using different encodings when writing and reading the file.
public class FileTextField extends TextField {
  private boolean deleteFile;
  protected void init(IndexSchema schema, Map<String,String> args) {
    String str = args.remove("deleteFile");
    if (str != null) {
      deleteFile = Boolean.parseBoolean(str);
    }
    super.init(schema, args);
  }

  public IndexableField createField(SchemaField field, Object value, float boost) {
    if (!field.indexed() && !field.stored()) {
      if (log.isTraceEnabled()) log.trace("Ignoring unindexed/unstored field: "
          + field);
      return null;
    }
    if (value instanceof FileHolder) {
      FileHolder fileHolder = (FileHolder) value;
      fileHolder.setDeleteFile(deleteFile);
      Reader reader;
      try {
        reader = fileHolder.getReader();
        return createFileTextField(field.getName(), reader, boost);
      } catch (IOException e) {
        throw new RuntimeException(e);
      }
    } else {
      return super.createField(field, value, boost);
    }
  }
  
  public Field createFileTextField(String name, Reader fr,
      float boost) {
    Field f = new org.apache.lucene.document.TextField(name, fr);
    f.setBoost(boost);
    return f;
  }
}
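For reference, a hedged sketch of how such a field type might be declared in schema.xml (the class package, analyzer chain, and field name are assumptions):
<fieldType name="fileText" class="com.mycompany.solr.schema.FileTextField" deleteFile="true">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
<field name="content" type="fileText" indexed="true" stored="false"/>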
FileHolder 
To carry the file path around, we create a wrapper class, FileHolder. Its getReader method returns an InputStreamReader; if deleteFile is true, the reader's close method deletes the file after closing the stream.
public static class FileHolder {
 private String filePath;
 private boolean deleteFile;
 private String encoding;
 public FileHolder(File file, String encoding)
   throws FileNotFoundException, UnsupportedEncodingException {
  this.filePath = file.getAbsolutePath();
  this.encoding = encoding;
 }
 public void setDeleteFile(boolean deleteFile) {
  this.deleteFile = deleteFile;
 }
  // @return  an InputStreamReader, if deleteFile is true, it will delete the file when the reader is closed.
 public Reader getReader() throws IOException {
  InputStreamReader reader = new InputStreamReader(new FileInputStream(
    filePath), encoding) {
   @Override
   public void close() throws IOException {
    super.close();
    if (deleteFile) {
     boolean deleted = new File(filePath).delete();
     if (!deleted) {
      log.error("Unable to delete " + filePath);
     }
    }
   }
  };
  return reader;
 }
}
Define the FileTextField Field in schema.xml
FileTextField is similar to the standard Solr TextField: we can define tokenizers and filters for index and query analysis. It accepts one additional parameter: deleteFile. The value of stored can't be true.
Read the Document from a Stream and Add the FileTextField Value into Solr
The following code uses Gson's streaming JsonReader to read one document. We determine the size of zippedcontent from the size field; if it is too large, we write the uncompressed string into a temporary file and add a FileHolder instance to the content field.
private static void readOneDoc(JsonReader reader, SolrQueryRequest request)
    throws IOException {
  String str;
  reader.beginObject();
  long size = 0;
  Object unzippedcontent = null;
  boolean useFileText = false;
  SolrInputDocument solrDoc = new SolrInputDocument();
  while (reader.hasNext()) {
    str = reader.nextName();
    if ("size".equals(str)) {
      size = Long.parseLong(reader.nextString());
      if (size > size_LIMIT_FILETEXT) {
        useFileText = true;
      }
    } else if ("zippedcontent".equals(str) && reader.peek() != JsonToken.NULL) {
      if (useFileText) {
        // unzippedcontent is a FileHolder backed by a temp file
        unzippedcontent = unzipValueToTmpFile(reader);
      } else {
        // decoded and uncompressed string
        unzippedcontent = unzipValueDirectly(reader);
      }
    } else {
      // skip unknown fields, in case the server-side code changes
      reader.skipValue();
    }
  }
  reader.endObject();
  if (unzippedcontent != null) {
    // add the decoded value (String or FileHolder) to the content field
    solrDoc.addField("content", unzippedcontent);
  }
  // add it to solr
  UpdateRequestProcessorChain updateChain = request.getCore()
      .getUpdateProcessingChain("/update");
  AddUpdateCommand command = new AddUpdateCommand(request);
  command.solrDoc = solrDoc;
  UpdateRequestProcessor processor = updateChain.createProcessor(request,
      new SolrQueryResponse());
  processor.processAdd(command);
}

private static String unzipValueDirectly(JsonReader reader)
    throws IOException {
  String value = reader.nextString();
  ZipInputStream zi = null;
  try {
    Base64InputStream base64is = new Base64InputStream(
        new ByteArrayInputStream(value.getBytes("UTF-8")));
    zi = new ZipInputStream(base64is);
    zi.getNextEntry();
    return IOUtils.toString(zi);
  } finally {
    IOUtils.closeQuietly(zi);
  }
}

private static FileHolder unzipValueToTmpFile(JsonReader reader) throws IOException {
  File tmpFile = File.createTempFile(TMP_FILE_PREFIX_ZIPPEDCONTENT, TMP_FILE_PREFIX_SUFFIX);
  String value = reader.nextString();
  ZipInputStream zi = null;
  OutputStreamWriter osw = null;
  try {
    Base64InputStream base64is = new Base64InputStream(
        new ByteArrayInputStream(value.getBytes("UTF-8")));
    zi = new ZipInputStream(base64is);
    zi.getNextEntry();
    osw = new OutputStreamWriter(new FileOutputStream(tmpFile), "UTF-8");
    IOUtils.copy(zi, osw, "UTF-8");
    zi.closeEntry();
  } finally {
    IOUtils.closeQuietly(osw);
    IOUtils.closeQuietly(zi);
  }
  return new FileHolder(tmpFile, "UTF-8");
}
Conclusion
  • Use GZIPOutputStream/GZIPInputStream and Base64OutputStream/Base64InputStream to compress the large text. This reduces the text size by about 85%, which reduces the time to transfer the request/response.
  • To reduce memory usage at the client side:
  •   Use a streaming API (Gson streaming or XML StAX) to read documents one by one.
  •   Define a custom Solr field type, FileTextField, which accepts a FileHolder as a value. FileTextField eventually passes a reader to Lucene, and Lucene uses the reader to read the content and add it to the index.
  •   When the text field is too big, first uncompress it to a temp file, create a FileHolder instance, then set the FileHolder instance as the field value.

Java: Use ZIP Stream and Base64 Stream to Compress Large String Data


In the last post, we introduced how to use the commons-codec Base64 class (Base64.encodeBase64String and Base64.decodeBase64) together with a GZIP stream to compress large string data and encode it as a Base64 string, pass it over the network to a remote client, then decode and uncompress it to get the original string on the remote side.

It works, but it has one drawback: it has to load the whole byte array or string into memory. If the string is too large, the application may hit an OutOfMemoryError.
To solve this problem, we can use the Apache commons-codec Base64OutputStream together with Java's GZIPOutputStream to write the Base64-encoded compressed data, and Base64InputStream with GZIPInputStream to decode it and get the original string back.
Apache Common Codec Base64InputStream and Base64OutputStream
The default behaviour of Base64InputStream is to DECODE Base64 data, whereas the default behaviour of Base64OutputStream is to ENCODE; this behaviour can be overridden by using a different constructor.
Stream Chaining Order
When compressing the string, the chain is GZIPOutputStream -> Base64OutputStream -> FileOutputStream:
GZIPOutputStream compresses the string, Base64OutputStream encodes the compressed bytes as Base64, and FileOutputStream writes the result to a file.


When decoding the Base64-encoded string, the chain is GZIPInputStream -> Base64InputStream -> FileInputStream:
FileInputStream reads from the file, Base64InputStream decodes the Base64-encoded data, and GZIPInputStream then uncompresses it to recover the original string.
Stream Chaining close
When closing chained streams, we should (and need only) close the outermost stream.
Compress String and Encode It
public static void zipAndEncode(File originalFile, File outFile) throws IOException {
    FileInputStream fis = null;
    GZIPOutputStream zos = null;
    try {
      fis = new FileInputStream(originalFile);
      FileOutputStream fos = new FileOutputStream(outFile);
      Base64OutputStream base64Out = new Base64OutputStream(fos);
      zos = new GZIPOutputStream(base64Out);
      IOUtils.copy(fis, zos);
    } finally {
      IOUtils.closeQuietly(fis);
      IOUtils.closeQuietly(zos);
    }
  }
Decode Base64 String and Uncompress It
public static void decodeAndUnzip(File inZippedFile, File outUnzippedFile) throws IOException {
    GZIPInputStream zis = null;
    OutputStreamWriter osw = null;
    try {
      FileInputStream fis = new FileInputStream(inZippedFile);
      Base64InputStream base64In = new Base64InputStream(fis);
      zis = new GZIPInputStream(base64In);
      
      FileOutputStream fos = new FileOutputStream(outUnzippedFile);
      osw = new OutputStreamWriter(fos, "UTF-8");
      IOUtils.copy(zis, osw);
    } finally {
      IOUtils.closeQuietly(zis);
      IOUtils.closeQuietly(osw);
    }
  }
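A quick usage sketch of the two methods above (file paths are placeholders):
File original = new File("/tmp/original.txt");
File encoded = new File("/tmp/zipped.b64");
File restored = new File("/tmp/restored.txt");

zipAndEncode(original, encoded);      // compress + Base64-encode to a file
decodeAndUnzip(encoded, restored);    // decode + uncompress back to text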
Test Result:
The original file is about 131,328 KB (131 MB); the Base64-encoded zipped output is 15,756 KB (15 MB).
The size is reduced by about 88%. The benefit is huge and well worth it.
Resource
Commons Base64OutputStream - Principle of least surprise?

How To Encode A URL String in Java


Recently, I was using the Apache HttpClient and HttpURLConnection to send the following request to a remote Solr server:
http://localhost:8080/solr/select?q=extractingat:[2012-11-14T04:08:54.000Z TO 2013-11-14T04:11:05.000Z]&start=0&rows=100
It threw an IllegalArgumentException like the one below:
java.lang.IllegalArgumentException: Illegal character in query at index 74: http://localhost:8080/solr/select?q=extractingat:[2012-11-14T04:08:54.000Z TO 2013-11-14T04:11:05.000Z]&start=0&rows=100
 at java.net.URI.create(URI.java:859)
 at org.apache.http.client.methods.HttpGet.<init>(HttpGet.java:69)
Caused by: java.net.URISyntaxException: Illegal character in query at index 74: http://localhost:8080/solr/select?q=extractingat:[2012-11-14T04:08:54.000Z TO 2013-11-14T04:11:05.000Z]&start=0&rows=100
 at java.net.URI$Parser.fail(URI.java:2829)
 at java.net.URI$Parser.checkChars(URI.java:3002)
 at java.net.URI$Parser.parseHierarchical(URI.java:3092)
The problem is that the URL contains special characters that should be encoded. Please read URL Encoding for details about which characters need to be encoded and why.

In Java, we can use URLEncoder to encode special characters.
To use URLEncoder, we just need to pay attention to one thing: which parts should be encoded.
The basic rule is that if a special character is being used for its special purpose, don't encode it.

In a URL, <scheme name> : <hierarchical part> [ ? <query> ] [ # <fragment> ], we usually need to encode the <query> and <fragment> parts.

For the two formats of query string:
Semicolon format: key1=value1;key2=value2;key3=value3
Ampersand format: key1=value1&key2=value2&key3=value3
We should not encode the ?, =, & or ; that separate the key/value pairs; we should encode only the keys and values themselves, as in the sketch below.
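A minimal sketch of encoding just the query value for the request above (note that URLEncoder targets the application/x-www-form-urlencoded format, so spaces become '+'):
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class EncodeSolrQuery {
  public static void main(String[] args) throws UnsupportedEncodingException {
    String q = "extractingat:[2012-11-14T04:08:54.000Z TO 2013-11-14T04:11:05.000Z]";
    // encode only the value of the q parameter, not the '?', '=' or '&' separators
    String url = "http://localhost:8080/solr/select?q=" + URLEncoder.encode(q, "UTF-8")
        + "&start=0&rows=100";
    System.out.println(url);
  }
}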

Resource
URL Encoding
URLEncoder Javadoc
URI scheme

Java: Use Zip Stream and Base64 Encoder to Compress Large String Data


Problem

In my project, we run queries on a Solr server and return a combined response back to the client. But some text fields are too large, and we would like to reduce their size.

Use GZIPOutputStream or ZipOutputStream?

To compress data, Java provides two streams: GZIPOutputStream and ZipOutputStream. What's the difference, and which should we use? The compression algorithm and performance are almost the same (both use DEFLATE); the difference is that the GZIP format compresses a single stream of data, while ZIP is an archive format that can hold many files in one archive, using putNextEntry and closeEntry to add each entry, as sketched below.

In this case we use GZIPOutputStream: we don't need to add and compress multiple files in a single archive, so there is no need for ZipOutputStream, and the GZIPOutputStream code is a little simpler.
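For contrast, a minimal sketch of the ZipOutputStream entry API (the file name, entry name, and content are placeholders):
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ZipSketch {
  public static void main(String[] args) throws IOException {
    try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream("out.zip"))) {
      // each piece of data must be wrapped in a named entry
      zos.putNextEntry(new ZipEntry("data.txt"));
      zos.write("some text".getBytes("UTF-8"));
      zos.closeEntry();
    }
  }
}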

Use GZIPOutputStream and Base64 Encoder to Compress the String
At the server side, we use GZIPOutputStream to compress a string into a byte array held in a ByteArrayOutputStream. We can't transfer the raw byte array as text in the HTTP response, so we use a Base64 encoder to encode the byte array, for example org.apache.commons.codec.binary.Base64.encodeBase64String(). We then add the compressed text as a field in the Solr document (not shown in the code below).

/**
   * At the server side, use GZIPOutputStream to compress the text into a byte
   * array, then convert the byte array to a Base64 string so it can be
   * transferred via an HTTP request.
   */
  public static String compressString(String srcTxt)
      throws IOException {
    ByteArrayOutputStream rstBao = new ByteArrayOutputStream();
    GZIPOutputStream zos = new GZIPOutputStream(rstBao);
    zos.write(srcTxt.getBytes());
    IOUtils.closeQuietly(zos);

    byte[] bytes = rstBao.toByteArray();
    // In my solr project, I use org.apache.solr.common.util.Base64:
    // return org.apache.solr.common.util.Base64.byteArrayToBase64(bytes, 0, bytes.length);
    return Base64.encodeBase64String(bytes);
  }

Test Data:

The original text in memory is about 134,479,520 bytes (134 MB); its zipped byte array is about 9,001,240 bytes (9 MB), and the Base64 string is 16,198,528 bytes (16 MB). The size is reduced by about 88%, which is huge and well worth it.

Use Base64 Decoder and GZIPInputStream to Uncompress the String

At the remote client side, we first read the text response from the stream. For how to read one Solr document at a time using a streaming API, please read:
Solr: Use STAX Parser to Read XML Response to Reduce Memory Usage
Solr: Use SAX Parser to Read XML Response to Reduce Memory Usage
Solr: Use JSON(GSon) Streaming to Reduce Memory Usage

Then we use org.apache.commons.codec.binary.Base64.decodeBase64() to decode the Base64 string into a byte array, use GZIPInputStream to uncompress that byte array back into the original string, and add it to the Solr document as a field.

/**
   * When the client receives the zipped Base64 string, it first decodes the
   * Base64 string to a byte array, then uses GZIPInputStream to uncompress the
   * byte array back into the original string.
   */
  public static String uncompressString(String zippedBase64Str)
      throws IOException {
    String result = null;

    // In my solr project, I use org.apache.solr.common.util.Base64.
    // byte[] bytes =
    // org.apache.solr.common.util.Base64.base64ToByteArray(zippedBase64Str);
    byte[] bytes = Base64.decodeBase64(zippedBase64Str);
    GZIPInputStream zi = null;
    try {
      zi = new GZIPInputStream(new ByteArrayInputStream(bytes));
      result = IOUtils.toString(zi);
    } finally {
      IOUtils.closeQuietly(zi);
    }
    return result;
  }

Test Code

public static void main(String... args) throws IOException {
    String source = "-original-file-path;
    String zippedFile = "-base-64-zip-file-path-";
    FileInputStream fis = new FileInputStream(source);
    String srcTxt = IOUtils.toString(fis, "UTF-8");
    IOUtils.closeQuietly(fis);

    String str = compressString(srcTxt);
    FileWriter fw = new FileWriter(zippedFile);
    IOUtils.write(str, fw);
    IOUtils.closeQuietly(fw);

    fis = new FileInputStream(zippedFile);
    String zippedBase64Str = IOUtils.toString(fis, "UTF-8");
    IOUtils.closeQuietly(fis);

    String originalStr = uncompressString(zippedBase64Str);
    fw = new FileWriter("-revertedt-file-path");
    IOUtils.write(originalStr, fw);
    IOUtils.closeQuietly(fw);
  }

Display AdSense Ads Efficiently to Maximize Revenue and Conditionally Show Alternative Ads


As bloggers, we try to provide unique, quality content. At the same time, we also try to make some money from it. The first choice is AdSense, as it pays the best of all the ad networks.
As AdSense only allows 3 standard ad units and 3 link units on one page, one way to make more money is to always show the maximum allowed number of ads.

We write programming-related posts at lifelongprogrammer.blogspot.com.
We show 3 standard ad units: one in the header, one inside the post body, and one in the left sidebar.
This is easy: just add 2 AdSense widgets or HTML/JavaScript widgets in the Layout settings, and add the ad in the appropriate place in the post.

But what about the homepage, archive pages, and label pages?
What We Got
AdSense shows at most 3 standard ad units, in the order the ads appear in the page. In Blogger, the order is as below: center blog post, left sidebar (if any), right sidebar (if any).
<div class="columns-inner">
 <div class="column-center-outer"/>
 <div class="column-left-outer"/>
 <div class="column-right-outer"/>
</div>
AdSense always fills the first ad at the header; for the remaining 2 ads, AdSense fills the first 2 posts and leaves a blank area in the left sidebar.

This is not what we want:
1. We want to show ads "above the fold", where users can read without scrolling.
Ads above the fold have a better click-through rate than ads below the fold.
Place ads near where visitors' eyes naturally go in order to get the best CTR (top left, near titles, near site navigation, etc.).
Read more at 5 Simple Adsense Layout Examples for Increasing Click Through Rates.
2. Blank space in the left sidebar makes the blog look ugly and unprofessional.
What We Want
For the homepage, archive pages, and label pages, we want AdSense to always show the 3 standard ad units above the fold: the ad at the header, the ad in the first post, and the ad in the left sidebar.
Also, for the other posts, in the ad slot we want to display an alternative ad, for example Chitika: anyway, less is better than nothing.

Also, in the first post, we want to display one link ad unit below the title and one link ad at the end of the post.

With the following change, we saw an immediate increase in AdSense revenue.
Solution
Edit the Template
We edit the template: define one variable, firstAds, which is true by default; at the end of the post body <data:post.body/>, change its value to false.
<script src='//pagead2.googlesyndication.com/pagead/js/adsbygoogle.js'/>
<!-- define firstAds = true -->
<script>var firstAds = true;</script>
</head>

<div class='post-header'>
 <div class='post-header-line-1'/>
 <script>
 <!-- display one link ad below title if it's first post -->
 // <![CDATA[
 if(firstAds){
 document.write("<br><ins class='adsbygoogle' ></ins><script>(adsbygoogle = window.adsbygoogle || []).push({});<\/script>");
 }
 // ]]>
 </script>
</div>
<div class='post-body entry-content' expr:id='"post-body-" + data:post.id' itemprop='articleBody'>
 <data:post.body/>
 <!-- display one link ad after the first post, and set firstAds to false -->
 <script>
 // <![CDATA[
 if(firstAds){
 document.write("<br><ins class='adsbygoogle' ></ins><script>(adsbygoogle = window.adsbygoogle || []).push({});<\/script>");
 firstAds=false; // change its value to false
 }
 // ]]>
 </script>
Use firstAds to Conditionally Display the Ad in the Post
When adding the ad in the post, we use the firstAds variable: if it's true (meaning this is the first post on the homepage, an archive page, or a label page), display the AdSense ad; otherwise display the alternative ad.
<script>
//<![CDATA[ 
if(firstAds){
document.write("<ins class='adsbygoogle' ...></ins><script>(adsbygoogle = window.adsbygoogle || []).push({});<\/script>")
}  else {
 // javascript to show alternative ads.
}
//]]>  
</script>
Resources
How To Insert Adsense Ads In Blogger Post
7 Blogger Page Types: Analysis, Code Snippets, Data Matrix
5 Simple Adsense Layout Examples for Increasing Click Through Rates

Use Json(Gson) to Serialize/Deserialize Collection of Object to Java Properties File


It's common and easy to store application configuration as key=value pairs in a Java properties file.
But how do we store an object, or a collection of objects, in a Java properties file?

For example, I want to store a list of failed-to-fetch items in a Java properties file, including the item id, the number of failed attempts, and some other properties of the item, and then deserialize it back into a list of UnableToFetchItem:
unableToFetchList={"id_1"\:{"id"\:"id_1", "extracting"\:a_long_millseconds,"failedTimes"\:a_int}, "id_x":{}, }

If we write our own code, it gets troublesome: when serializing the item list we have to insert delimiters to separate items and serialize key/value pairs; when deserializing it back, we have to parse the delimiters, construct the item objects, and set the field values.

As my application already uses the Java JSON library Gson, I decided to serialize the list of UnableToFetchItem as a JSON string, and then deserialize the JSON string back to a list of UnableToFetchItem. The code is much simpler and easier to read and maintain.
Implementation
The code is simple; the only tricky part is how to serialize and deserialize a collection of objects.
See the Gson User Guide: Collections Examples.
The key is to define a TypeToken with the correct parameterized type:
private static Type objType = new TypeToken<Map<String,UnableToFetch>>() {}.getType();
import java.lang.reflect.Type;
import com.google.gson.Gson;
import com.google.gson.reflect.TypeToken;

public class AppConfig {
  private Map<String,UnableToFetch> unableToFetchMap = new HashMap<String,UnableToFetch>();
  private static Type objType = new TypeToken<Map<String,UnableToFetch>>() {}
      .getType();
  
  public static Map<String,UnableToFetch> fromJson(String jsonStr) {
    Map<String,UnableToFetch> result = null;
    if (StringUtils.isNotBlank(jsonStr)) {
      result = new Gson().fromJson(jsonStr, objType);
    }
    if (result == null) {
      result = new HashMap<String,UnableToFetch>();
    }
    return result;
  }
  
  public static String toJson(Map<String,UnableToFetch> map) {
    return new Gson().toJson(map, objType);
  }
  
  public AppConfig readFromPropertiesFile() {
    File proppertiesFile = new File("PROPERTIES_FILE_NAME");
    if (!proppertiesFile.exists()) {
      return this;
    }
    Properties properties = new Properties();
    FileInputStream fis = null;
    try {
      fis = new FileInputStream(proppertiesFile);
      properties.load(fis);
      String str = properties.getProperty("unableToFetchList");
      unableToFetchMap = fromJson(str);
    } catch (Exception e) {
      throw new RuntimeException(e);
    } finally {
      IOUtils.closeQuietly(fis);
    }
    return this;
  }

  public void saveProperties() {
    File proppertiesFile = new File("PROPERTIES_FILE_NAME");
    
    Properties properties = new Properties();
    FileOutputStream fos = null;
    try {
      fos = new FileOutputStream(proppertiesFile);
      if (unableToFetchMap != null) {
        properties.setProperty("unableToFetchList", toJson(unableToFetchMap));
      }
      properties.store(fos, null);
    } catch (Exception e) {
      throw new RuntimeException(e);
    } finally {
      IOUtils.closeQuietly(fos);
    }
  }
  
  public static void main(String[] args) {
    Map<String,UnableToFetch> unableToFetchMap = new HashMap<String,UnableToFetch>();
    
    unableToFetchMap.put("id1", new UnableToFetch("id1", 1, 11111));
    String jsonStr = AppConfig.toJson(unableToFetchMap);
    System.out.println(jsonStr);
    
    unableToFetchMap = AppConfig.fromJson(jsonStr);
    jsonStr = AppConfig.toJson(unableToFetchMap);
    System.out.println(jsonStr);
  }
  
  public static class UnableToFetch {
    private String id;
    private long along;
    private int failedTimes;
    public UnableToFetch() {}
    // constructor matching the usage in main: (id, failedTimes, along)
    public UnableToFetch(String id, int failedTimes, long along) {
      this.id = id;
      this.failedTimes = failedTimes;
      this.along = along;
    }
    public String toString() {
      return ToStringBuilder.reflectionToString(this);
    }
  }
}

Analyze and Fix One Java System.exit Hang Problem


My Solr application is designed to run on the user's laptop; it fetches content from a remote Solr server via a proxy application. To reduce the impact on the user, we limit the max heap size to 512 MB: -Xmx512m.
We take several approaches to reduce memory usage, for example using JSON (Gson) streaming to read Solr documents one by one from the response.
Please see more details at:
Solr: Use JSON(GSon) Streaming to Reduce Memory Usage
Solr: Use STAX Parser to Read XML Response to Reduce Memory Usage
Solr: Use SAX Parser to Read XML Response to Reduce Memory Usage

But the Solr application may still throw an OutOfMemoryError. When this happens, I have to kill the embedded Jetty server; another application detects that the Jetty server is down (by sending an HTTP request to /solr/admin/ping) and restarts it.

But I found that at some point there were 2 embedded Jetty applications running: the first one hit an OutOfMemoryError and tried to kill itself, but hung; the second was starting but failed with the following error:
org.apache.solr.common.SolrException: 
Index locked for write for core collection1.
SEVERE: Unable to create core: collection1
org.apache.solr.common.SolrException: Index locked for write for core collection1
 at org.apache.solr.core.SolrCore.<init>(SolrCore.java:806)
 at org.apache.solr.core.SolrCore.<init>(SolrCore.java:619)
Caused by: org.apache.lucene.store.LockObtainFailedException: Index locked for write for core collection1
 at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:484)
 at org.apache.solr.core.SolrCore.<init>(SolrCore.java:730)
This is because the first Jetty application was never actually killed, so it was still holding the write.lock file. This problem can be fixed by fixing the first issue: why the first Jetty application didn't kill itself.
Why System.exit hang?
I reproduced the problem, took a thread dump, and used the IBM Java Thread and Monitor Dump Analyzer to analyze it.
Quick Conclusion
Don't call System.exit in ServletContextListener.contextDestroyed.

MyTaskThread
This happens first: when the application code detects an OutOfMemoryError, it does some cleanup and then calls System.exit to kill itself. System.exit calls Shutdown.runHooks, which runs all shutdown hook threads added via Runtime.addShutdownHook(Thread hook).
At this point it owns the lock on the Shutdown class and waits for Jetty's ShutdownThread to finish.
MyTaskThread State: in Object.wait()
Monitor: Owns Monitor Lock on 0x00000000f0051870
at java.lang.Object.wait(Native Method) 
- waiting on [0x00000000f00a7000] (a org.eclipse.jetty.util.thread.ShutdownThread) 
at java.lang.Thread.join(Unknown Source) 
- locked [0x00000000f00a7000] (a org.eclipse.jetty.util.thread.ShutdownThread) 
at java.lang.Thread.join(Unknown Source) 
at java.lang.ApplicationShutdownHooks.runHooks(Unknown Source) 
at java.lang.ApplicationShutdownHooks$1.run(Unknown Source) 
at java.lang.Shutdown.runHooks(Unknown Source) 
at java.lang.Shutdown.sequence(Unknown Source) 
at java.lang.Shutdown.exit(Unknown Source) 
- locked [0x00000000f0051870] (a java.lang.Class for java.lang.Shutdown) 
at java.lang.Runtime.exit(Unknown Source) 
at java.lang.System.exit(Unknown Source) 
at MyAppUtilUtil.hardCommitLocalCache(MyAppUtilUtil.java:152) 
at MyAppUtilUtil.synchronizeP1Index(MyAppUtilUtil.java:372) 
at MyAppUtil$MyAppUtilUtilTask$MyTaskThread.doRun(MyAppUtilUtil.java:352) 
at MyAppUtil$MyAppUtilUtilTask$MyTaskThread.run(MyAppUtilUtil.java:324) 
ShutdownThread
Look at Jetty's ShutdownThread: it's a shutdown hook added by jetty.server.Server. It calls MyServletContextListener.contextDestroyed, which calls the same cleanup method MyAppUtil.shutdown, which in turn calls System.exit.
Now the cause of the hang is obvious.
ShutdownThread calls Shutdown.exit and tries to acquire the lock on the Shutdown class to enter the synchronized (lock) block. It can't, because the lock is already owned by the MyTaskThread thread, so ShutdownThread waits forever.

The MyTaskThread thread owns the lock on the Shutdown class and waits for Jetty's ShutdownThread to finish, so it also waits forever. This deadlock is what makes System.exit hang.

State: Waiting on monitor
Monitor: Waiting for Monitor Lock on 0x00000000f0051870
at java.lang.Shutdown.exit(Unknown Source) 
- waiting to lock [0x00000000f0051870] (a java.lang.Class for java.lang.Shutdown) 
at java.lang.Runtime.exit(Unknown Source) 
at java.lang.System.exit(Unknown Source) 
at MyAppUtilUtil.hardCommitLocalCache(MyAppUtilUtil.java:152) 
at MyAppUtil$MyAppUtilUtilTask.shutdown(MyAppUtilUtil.java:126) 
at MyAppUtil.shutdown(MyAppUtilUtil.java:64) 
- locked [0x00000000f04cbfc0] (a MyAppUtil) 
at MyServletContextListener.contextDestroyed(MyServletContextListener.java:18) 
at org.eclipse.jetty.server.handler.ContextHandler.doStop(ContextHandler.java:813) 
at org.eclipse.jetty.servlet.ServletContextHandler.doStop(ServletContextHandler.java:158) 
at org.eclipse.jetty.webapp.WebAppContext.doStop(WebAppContext.java:504) 
at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89) 
- locked [0x00000000f010ce40] (a java.lang.Object) 
at org.eclipse.jetty.server.handler.HandlerCollection.doStop(HandlerCollection.java:250) 
at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89) 
- locked [0x00000000f00430c8] (a java.lang.Object) 
at org.eclipse.jetty.server.handler.HandlerWrapper.doStop(HandlerWrapper.java:107) 
at org.eclipse.jetty.server.Server.doStop(Server.java:338) 
at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89) 
- locked [0x00000000f00404d8] (a java.lang.Object) 
at org.eclipse.jetty.util.thread.ShutdownThread.run(ShutdownThread.java:131)
To fix the problem, in MyServletContextListener.contextDestroyed I do all the needed cleanup but don't call System.exit, as sketched below.
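A minimal sketch of the fixed listener (class and method names follow the stack traces above; the cleanup body is illustrative):
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class MyServletContextListener implements ServletContextListener {
  @Override
  public void contextInitialized(ServletContextEvent sce) {}

  @Override
  public void contextDestroyed(ServletContextEvent sce) {
    // do the cleanup directly (commit local caches, stop background tasks, ...)
    // but never call System.exit() here: when the container is being stopped by
    // Jetty's ShutdownThread hook, calling System.exit() from this method
    // deadlocks as described above.
  }
}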
