Extending FindBugs: Creating Our Detector Plugin to Find Unclosed SolrQueryRequest


Recently, we found a bug in our Solr-related project. Basically, the problem was that:
1. We created a SolrQueryRequest but forgot to close it when it was no longer needed.
2. We called SolrCore.getSearcher() but forgot to decref the returned RefCounted:
RefCounted<SolrIndexSearcher> refCounted = req.getCore().getSearcher();
Each of these leaves one SolrIndexSearcher (and thus its various caches) unclosed every time we run a commit, which eventually causes an OutOfMemoryError. See the related post: Solr RefCounted: Don't forget to close SolrQueryRequest or decref solrCore.getSearcher.
After realizing this, we searched our code and fixed all bugs of this kind. But how can we prevent the same kind of error from happening again in the future?

This made me think of writing a FindBugs plugin to detect and report this kind of bug. The following describes how to do it.
Test-Driven: How to test our FindBugs detector

When we learn or write code we are not familiar with, the best way is to write some code, test it, make a change, and test it again.
So it's important to know how to write test code for our detector.

Fortunately, there is a project on Github, test-driven-detectors4findbugs, which lets us write test code for our detector easily. It targets FindBugs 1.3.9, so some small changes are needed to make it work with FindBugs 2.x. I forked the project, made it work with FindBugs 2.x, and put it at Github.

The following is the test code for NotClosedRequestDetector:


import org.junit.Test;

// DetectorAssert comes from the (forked) test-driven-detectors4findbugs project.
import com.youdevise.fbplugins.tdd4fb.DetectorAssert;

import edu.umd.cs.findbugs.BugReporter;

public class NotClosedRequestDetectorTester {
 @Test
 public void demoDecrefRefCountedSearcher() throws Exception {
  BugReporter bugReporter = DetectorAssert.bugReporterForTesting();
  NotClosedRequestDetector detector = new NotClosedRequestDetector(
    bugReporter);

  // DemoClosedSolrRequest closes its SolrQueryRequest, so no bug is expected.
  DetectorAssert.assertNoBugsReported(DemoClosedSolrRequest.class,
    detector, bugReporter);
 }

 @Test
 public void demoUnClosedSolrRequest() throws Exception {
  BugReporter bugReporter = DetectorAssert.bugReporterForTesting();
  NotClosedRequestDetector detector = new NotClosedRequestDetector(
    bugReporter);

  // DemoUnClosedSolrRequest never closes its SolrQueryRequest,
  // so the detector should report a bug.
  DetectorAssert.assertBugReported(DemoUnClosedSolrRequest.class,
    detector, bugReporter);
  // printBugReporter and assertFindbug are small helpers in the complete
  // test class at Github.
  printBugReporter(detector, bugReporter);

  assertFindbug(bugReporter,
    NotClosedRequestDetector.NOT_CLOSE_SOLR_REQUEST);
 }
}
Learn how to write a custom detector
The best way is to learn from the source code of the FindBugs project (and its findbugsTestCases) and the fb-contrib project.
Check where we forget to close SolrQueryRequest

The idea is simple: if a method initializes a SolrQueryRequest, we increment solrQueryInitCount by one; if it calls SolrQueryRequest.close(), we decrement solrQueryInitCount by one.
If solrQueryInitCount is not zero at the end of the method, we report a bug.

This implementation is not perfect, but it works. In the future we can improve it; we can also add one more detector to make sure SolrQueryRequest.close() and RefCounted.decref() are called in a finally block.
The code is below; the complete source/test code, findbugs.xml and messages.xml can be found at Github.

import java.util.ArrayList;
import java.util.List;

import org.apache.bcel.Repository;
import org.apache.bcel.classfile.Code;
import org.apache.bcel.classfile.JavaClass;

import edu.umd.cs.findbugs.BugAccumulator;
import edu.umd.cs.findbugs.BugInstance;
import edu.umd.cs.findbugs.BugReporter;
import edu.umd.cs.findbugs.BytecodeScanningDetector;
import edu.umd.cs.findbugs.SourceLineAnnotation;

public class NotClosedRequestDetector extends BytecodeScanningDetector {
 public static final String NOT_CLOSE_SOLR_REQUEST = "NOT_CLOSE_SOLR_REQUEST";

 // Number of SolrQueryRequests created but not yet closed in the current method.
 private int solrQueryInitCount = 0;
 // Program counters of the <init> calls, used for source line annotations.
 private List<Integer> solrQueryInitPCList = new ArrayList<Integer>();

 private final BugReporter bugReporter;
 private final BugAccumulator bugAccumulator;

 public NotClosedRequestDetector(BugReporter bugReporter) {
  this.bugReporter = bugReporter;
  bugAccumulator = new BugAccumulator(bugReporter);
 }

 @Override
 public void visit(Code obj) {
  solrQueryInitCount = 0;
  solrQueryInitPCList.clear();
  super.visit(obj);
  if (solrQueryInitCount > 0) {
   for (Integer pc : solrQueryInitPCList) {
    bugAccumulator.accumulateBug(new BugInstance(this,
      NOT_CLOSE_SOLR_REQUEST, HIGH_PRIORITY)
      .addClassAndMethod(this), SourceLineAnnotation
      .fromVisitedInstruction(getClassContext(), this, pc));
   }
   bugAccumulator.reportAccumulatedBugs();
  }
 }

 @Override
 public void sawOpcode(int seen) {
  try {
   if (seen == INVOKESPECIAL) {
    // A constructor call on a SolrQueryRequest subtype opens a request.
    if (isSolrRequest() && getNameConstantOperand().equals("<init>")) {
     solrQueryInitCount++;
     solrQueryInitPCList.add(getPC());
    }
   } else if (seen == INVOKEVIRTUAL || seen == INVOKEINTERFACE
     || seen == INVOKEINTERFACE_QUICK) {
    // A close() call on a SolrQueryRequest subtype balances one <init>.
    if (isSolrRequest() && getNameConstantOperand().equals("close")
      && getSigConstantOperand().equals("()V")) {
     solrQueryInitCount--;
    }
   }
  } catch (ClassNotFoundException cnfe) {
   bugReporter.reportMissingClass(cnfe);
  }
 }

 private boolean isSolrRequest() throws ClassNotFoundException {
  String className = getClassConstantOperand();
  if (className == null)
   return false;
  JavaClass classClass = Repository.lookupClass(className);
  JavaClass requestClass = Repository
    .lookupClass("org.apache.solr.request.SolrQueryRequest");
  return classClass.instanceOf(requestClass);
 }
}
Check where we forget to decref RefCounted
Similarly, I created another detector, UnDecrefRefCountedSearcherDetector, to detect where we forget to decref a RefCounted; its source code can be found at Github. The heart of such a detector could look like the sketch below.
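This is a simplified illustration, not the real implementation: the bug type string and the exact counting logic are my assumptions, and the actual code at Github may differ.

public class UnDecrefRefCountedSearcherDetector extends BytecodeScanningDetector {
 public static final String UN_DECREF_SEARCHER = "UN_DECREF_SEARCHER";

 private final BugReporter bugReporter;
 // Number of RefCounted searchers acquired but not yet decref'ed.
 private int getSearcherCount = 0;

 public UnDecrefRefCountedSearcherDetector(BugReporter bugReporter) {
  this.bugReporter = bugReporter;
 }

 @Override
 public void visit(Code obj) {
  getSearcherCount = 0;
  super.visit(obj);
  if (getSearcherCount > 0) {
   bugReporter.reportBug(new BugInstance(this, UN_DECREF_SEARCHER,
     HIGH_PRIORITY).addClassAndMethod(this));
  }
 }

 @Override
 public void sawOpcode(int seen) {
  if (seen == INVOKEVIRTUAL) {
   String cls = getClassConstantOperand(); // slashed class name
   String method = getNameConstantOperand();
   if ("org/apache/solr/core/SolrCore".equals(cls)
     && "getSearcher".equals(method)) {
    getSearcherCount++; // a searcher reference was acquired
   } else if ("org/apache/solr/util/RefCounted".equals(cls)
     && "decref".equals(method)) {
    getSearcherCount--; // a searcher reference was released
   }
  }
 }
}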
How to deploy it

We can put the built jar directly into eclipse\plugins\edu.umd.cs.findbugs.plugin.eclipse_***\plugin, or add the jar in Preferences -> FindBugs -> "Plugins and misc. Settings" tab.
It works

The following picture shows what it looks like after deploying my FindBugs plugin.

Use Eclipse Findbugs to Improve Code Quality


FindBugs is a great tool to find potential bugs in our code. We can use it in Eclipse, NetBeans, IntelliJ or Hudson, or as a Maven task: mvn findbugs:findbugs.

Customizing the FindBugs Eclipse Plugin
After installing FindBugs from http://findbugs.cs.umd.edu/eclipse/, we can change its settings at the workspace level or the project level.

In the Report configuration tab, we can make it run automatically, set the analysis effort to maximal, enable the FindBugs cloud, and select specific bug categories.

In the Plugins and misc. settings tab, we can run FindBugs as an extra job and cache class files, and we can add FindBugs plugins.
One great FindBugs plugin we should add: fb-contrib.
In the Detector configuration tab, we can enable or disable individual detectors.
Bug Group Configuration
When we try to fix potential bugs that FindBugs reports, we can use FindBugs's "Bug Explorer" view; in this view, we can configure how to group bugs.

Two useful configurations:
1. First sort by package, then by class, then by pattern.
Useful when we try to view and fix bugs in one package or class.
2. First sort by pattern, then by package, then by class.
Useful when we try to fix the same kind of potential bug everywhere.
When we click a potential bug reported by FindBugs, the "Bug Info" view shows the details: why it is considered a bug and how to fix it.
What FindBugs Looks Like

Screenshots (not shown here): the FindBugs GUI (mvn findbugs:gui), mvn findbugs:check, and FindBugs reporting in Maven 3.

Solr: Cache Responses of Slow Queries


I extended Solr stats to support stats.query before, but found that it is quite slow when running stats queries and stats.facet against 50 million documents.
Solr: Extend StatsComponent to Support stats.query, stats.query and facet.topn

A query like the one below usually takes more than 2 minutes, and it is much slower when running distributed stats queries.
http://localhost:8080/solr/select?q=*:*&rows=0&stats=true&stats.pagination=true&stats.field=size&f.size.stats.query=(size:[0 TO 1024))&f.size.stats.query=(size:[1024 TO 102400])&stats.facet=type&stats.facet.type.limit=5&stats.facet.ext_name.offset=0

So I am thinking of caching the responses of stats queries, so clients can get a response immediately.
The basic idea:
1. When we make a request, we can add one parameter:
cache=true/false – whether we want to use the cache for this query and whether we want to save the response of this query into the cache.

For the first query, Solr will run the query and put the result into the cache; later, for the same query, Solr will return the response directly from the cache.
This does not persist the query, meaning Solr will not rebuild the cache after a restart.

2. Use the cache handler to manually add some queries into the cache. If the parameter persist is true, these queries will be saved into a file, and Solr will rerun these queries and save the responses into the cache after a restart.
Example:
http://localhost:8080/solr/cachehandler?action=add&persist=true&cachequery=q=*:*%26rows=0%26facet=true%26facet.field=type
We can add multiple queries: cachequery=query1,query2.
We need to escape special characters in the query: convert & to %26.

After adding some queries, run http://localhost:8080/solr/cachehandler?action=refill&sync=true/false; this will run the queries synchronously or asynchronously and push the results into the cache.

We can use http://localhost:8080/solr/cachehandler?action=remove&cachequery=q=*:*%26rows=0%26facet=true%26facet.field=type to remove queries from the cache and from the property file.

These cached values will be automatically rebuilt whenever a change is committed to the Solr server.
Implementation Code
The code is below; you can review the complete code at Github.
QueryResultLRUCache
This class extends LRUCache; the key is a customized <String, String[]> hash map, CacheKeyHashMap, whose contents look like: q=query, fq=fq1,fq2, rows=rows. The value is the cached NamedList response.

We convert the cachequery string from the cache handler into a CacheKeyHashMap (also adding the defaults, appends and invariants parameters defined in solrconfig.xml for the handler), and put it into the cache.
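As an aside, my reading of why a customized map is needed as the cache key (the real CacheKeyHashMap is at Github): String[] values use identity equals/hashCode, so a plain HashMap<String, String[]> would never produce cache hits for equal parameters. A minimal sketch:

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class CacheKeyHashMap extends HashMap<String, String[]> {
 @Override
 public boolean equals(Object o) {
  if (this == o)
   return true;
  if (!(o instanceof CacheKeyHashMap))
   return false;
  CacheKeyHashMap other = (CacheKeyHashMap) o;
  if (size() != other.size())
   return false;
  // Compare array contents, not array identity.
  for (Map.Entry<String, String[]> e : entrySet()) {
   if (!Arrays.equals(e.getValue(), other.get(e.getKey())))
    return false;
  }
  return true;
 }

 @Override
 public int hashCode() {
  int h = 0;
  for (Map.Entry<String, String[]> e : entrySet()) {
   h += e.getKey().hashCode() ^ Arrays.hashCode(e.getValue());
  }
  return h;
 }
}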
public class QueryResultLRUCache extends
  LRUCache<CacheKeyHashMap, NamedList<Object>> {
 private String description = "My LRU Cache";
 private static final String PROPERTY_FILE = "querycache.properties";
 private static final String PRO_QUERIES = "queries";

 private final Set<String> cachedQueries = new HashSet<String>();
 private SolrCore core;

 public void init(int size, int initialSize, SolrCore core) {
  Map<String, String> args = new HashMap<String, String>();
  args.put("size", String.valueOf(size));
  args.put("initialSize", String.valueOf(initialSize));
  // regenerator (like logger used below) is declared in the complete source at Github.
  super.init(args, null, regenerator);
  this.core = core;
  cachedQueries.addAll(readCachedQueriesProperty(core));
  for (String query : cachedQueries) {
   CacheKeyHashMap key = convertQueryStringToParams(query, null);
   // Register the key with an empty value; refill computes the responses.
   put(key, null);
  }
  // Refill once, after all persisted queries have been registered.
  asyncRefill();
 }

 private Set<String> readCachedQueriesProperty(SolrCore core) {
  Set<String> queries = new LinkedHashSet<String>();
  File propertyFile = new File(getPropertyFilePath(core));
  if (propertyFile.exists()) {
   InputStream is = null;
   try {
    is = new FileInputStream(propertyFile);
    Properties properties = new Properties();
    properties.load(is);
    String queriesStr = properties.getProperty(PRO_QUERIES);
    if (queriesStr != null) {
     String[] queriesArray = queriesStr.split(",");
     for (String query : queriesArray) {
      queries.add(query);
     }
    }

   } catch (Exception e) {
    logger.error("Exception happened when read " + propertyFile, e);
   } finally {
    if (is != null) {
     try {
      is.close();
     } catch (IOException e) {
      logger.error("Exception happened when close "
        + propertyFile, e);
     }
    }
   }
  }

  return queries;
 }

 private void saveCachedQueries() {
  if (!cachedQueries.isEmpty()) {
   File propertyFile = new File(getPropertyFilePath(core));
   OutputStream out = null;
   try {
    out = new FileOutputStream(propertyFile);
    Properties properties = new Properties();
    StringBuilder queries = new StringBuilder(
      16 * cachedQueries.size());
    Iterator<String> it = cachedQueries.iterator();
    while (it.hasNext()) {
     queries.append(it.next());
     if (it.hasNext()) {
      queries.append(",");
     }
    }
    properties.setProperty(PRO_QUERIES, queries.toString());
    properties.store(out, null);
   } catch (Exception e) {
    logger.error("Exception happened when save " + propertyFile, e);
   } finally {
    if (out != null) {
     try {
      out.close();
     } catch (IOException e) {
      logger.error("Exception happened when close "
        + propertyFile, e);
     }
    }
   }
  }
 }

 /*
  * Save cachedQueries to property File
  */
 public void close() {
  saveCachedQueries();
 }

 private static final String getPropertyFilePath(SolrCore core) {
  return core.getDataDir() + File.separator + PROPERTY_FILE;
 }
 public NamedList<Object> remove(String query) {
  CacheKeyHashMap params = convertQueryStringToParams(query, null);
  synchronized (cachedQueries) {
   cachedQueries.remove(query);
  }
  synchronized (map) {
   return remove(params);
  }
 }

 public NamedList<Object> remove(CacheKeyHashMap key) {
  synchronized (map) {
   return map.remove(key);
  }
 }
 public NamedList<Object> put(String query, NamedList<Object> value,
   boolean persist) {
  return put(query, value, persist, true, null);
 }
 public NamedList<Object> put(String query, NamedList<Object> value,
   boolean persist, boolean addDefault) {
  return put(query, value, persist, addDefault, null);
 }
 public NamedList<Object> put(String query, NamedList<Object> value,
   boolean persist, boolean addDefault, String handlerName) {
  CacheKeyHashMap key = convertQueryStringToParams(query, handlerName);
  if (persist) {
   synchronized (cachedQueries) {
    cachedQueries.add(query);
   }
  }
  return put(key, value);
 }

 @Override
 public NamedList<Object> put(CacheKeyHashMap key, NamedList<Object> value) {
  if (value != null) {
   value.remove("CachedAt");
   value.add("CachedAt",
     DateUtil.getThreadLocalDateFormat().format(new Date()));
  }
  return super.put(key, value);
 }
 public void asyncRefill() {
  refill(false);
 }
 public void refill(boolean sync) {
  if (sync) {
   refillImpl();
  } else {
   new Thread(new Runnable() {
    @Override
    public void run() {
     refillImpl();
    }
   }).start();
  }
 }
 @SuppressWarnings("unchecked")
 private void refillImpl() {
  synchronized (map) {
   SolrQueryRequest myreq = null;
   try {
    Iterator<CacheKeyHashMap> it = map.keySet().iterator();
    SolrRequestHandler searchHandler = core
      .getRequestHandler("/select");

    Map<CacheKeyHashMap, NamedList<Object>> newValue = new HashMap<CacheKeyHashMap, NamedList<Object>>();
    myreq = new LocalSolrQueryRequest(core,
      new ModifiableSolrParams());
    while (it.hasNext()) {
     CacheKeyHashMap query = it.next();
     // Set this query's params on the request before running the handler.
     MultiMapSolrParams params = new MultiMapSolrParams(query);
     myreq.setParams(params);
     SolrQueryResponse rsp = new SolrQueryResponse();
     searchHandler.handleRequest(myreq, rsp);
     newValue.put(query, rsp.getValues());
    }
    map.putAll(newValue);
   } finally {
    // Don't forget to close the SolrQueryRequest.
    if (myreq != null) {
     myreq.close();
    }
   }
  }
 }
 public void clearValues() {
  synchronized (map) {
   Iterator<Map.Entry<CacheKeyHashMap, NamedList<Object>>> it = map
     .entrySet().iterator();
   while (it.hasNext()) {
    Map.Entry<CacheKeyHashMap, NamedList<Object>> entry = it.next();
    entry.setValue(null);
   }
  }
 }
 public static CacheKeyHashMap getKey(SolrQueryRequest req) {
  ModifiableSolrParams modifiableParams = new ModifiableSolrParams(
    req.getParams());
  modifiableParams.remove("cache");
  modifiableParams.remove("refresh");
  return paramsToHashMap(modifiableParams);
 }
 @Override
 public void warm(SolrIndexSearcher searcher,
   SolrCache<CacheKeyHashMap, NamedList<Object>> old) {
  throw new UnsupportedOperationException();
 }
 private static CacheKeyHashMap paramsToHashMap(
   ModifiableSolrParams modifiableParams) {
  CacheKeyHashMap map = new CacheKeyHashMap();
  map.putAll(SolrParams.toMultiMap(modifiableParams.toNamedList()));
  return map;
 }

 @SuppressWarnings("rawtypes")
 private CacheKeyHashMap convertQueryStringToParams(String query,
   String handlerName) {
  ModifiableSolrParams modifiableParams = new ModifiableSolrParams();
  if (handlerName == null) {
   handlerName = "/select";
  }
  RequestHandlerBase handler = (RequestHandlerBase) core
    .getRequestHandler(handlerName);
  NamedList initArgs = handler.getInitArgs();
  if (initArgs != null) {
   Object o = initArgs.get("defaults");
   if (o != null && o instanceof NamedList) {
    modifiableParams.add(SolrParams.toSolrParams((NamedList) o));
   }
   o = initArgs.get("appends");
   if (o != null && o instanceof NamedList) {
    modifiableParams.add(SolrParams.toSolrParams((NamedList) o));
   }
   o = initArgs.get("invariants");
   if (o != null && o instanceof NamedList) {
    modifiableParams.add(SolrParams.toSolrParams((NamedList) o));
   }
  }
  modifiableParams.add(SolrRequestParsers.parseQueryString(query));
  return paramsToHashMap(modifiableParams);
 }
}
Use QueryResultLRUCache in SearchHandler
In handleRequestBody, if the request sets cache=true, the handler first tries to get the response from the cache; if it exists, it returns it directly, otherwise it runs the query and puts the response into the cache.
public void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp,
  List<SearchComponent> components, ResponseBuilder rb) throws Exception {
 QueryResultLRUCache queryResultCache = req.getCore().getQuerycaCache();
 SolrParams reqParams = req.getParams();
 boolean cacheRst = reqParams.getBool("cache", false);
 if (cacheRst) {
  boolean refresh = reqParams.getBool("refresh", false);
  if (!refresh) {
   NamedList<Object> cacheNL = queryResultCache
     .get(QueryResultLRUCache.getKey(req));
   if (cacheNL != null) {
    NamedList<Object> responseHeader = rsp.getResponseHeader();
    responseHeader.add("UseCache", "true");
    NamedList<Object> rstNL = cacheNL.clone();
    rsp.getValues().addAll(rstNL);
    return;
   }
  }
 }
 // ... run the normal search components ...
 if (cacheRst) {
  queryResultCache.put(QueryResultLRUCache.getKey(req), rsp.getValues());
 }
}
Rebuild the cache asynchronously after commit in RunUpdateProcessorFactory
public void processCommit(CommitUpdateCommand cmd) throws IOException {
    updateHandler.commit(cmd);
    super.processCommit(cmd);
    changesSinceCommit = false;
    
    QueryResultLRUCache querycaCache = req.getCore().getQuerycaCache();
    querycaCache.clearValues();
    querycaCache.asyncRefill();
  }
QueryResultCacheHandler
This class is simple; the code can be viewed at Github. A sketch of what it might look like is below.
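This is a simplified illustration based on the parameters described above (action, cachequery, persist and sync), not the real implementation:

public class QueryResultCacheHandler extends RequestHandlerBase {
 @Override
 public void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp)
   throws Exception {
  QueryResultLRUCache cache = req.getCore().getQuerycaCache();
  SolrParams params = req.getParams();
  String action = params.get("action", "");
  if ("add".equals(action)) {
   boolean persist = params.getBool("persist", false);
   // cachequery may contain multiple queries: cachequery=query1,query2
   for (String query : params.get("cachequery").split(",")) {
    cache.put(query, null, persist); // the value is computed by refill
   }
  } else if ("refill".equals(action)) {
   cache.refill(params.getBool("sync", false));
  } else if ("remove".equals(action)) {
   for (String query : params.get("cachequery").split(",")) {
    cache.remove(query);
   }
  }
 }

 @Override
 public String getDescription() {
  return "Manages the query result cache";
 }

 @Override
 public String getSource() {
  return null;
 }
}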

Solr RefCounted: Don't forget to close SolrQueryRequest or decref solrCore.getSearcher


How Solr uses RefCounted
RefCounted is an important concept in Solr: it keeps track of a reference count on a resource and closes the resource when the count hits zero.
For example, a Solr core reuses SolrIndexSearcher; it uses RefCounted to keep track of the searcher's reference count and to close it when the count hits zero.
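Stripped of details, Solr's RefCounted looks roughly like this simplified paraphrase (see the real org.apache.solr.util.RefCounted for the full version):

import java.util.concurrent.atomic.AtomicInteger;

public abstract class RefCounted<Type> {
 protected final Type resource;
 protected final AtomicInteger refcount = new AtomicInteger();

 public RefCounted(Type resource) {
  this.resource = resource;
 }

 public final RefCounted<Type> incref() {
  refcount.incrementAndGet();
  return this;
 }

 public final Type get() {
  return resource;
 }

 public void decref() {
  // Close the underlying resource when the last reference is released.
  if (refcount.decrementAndGet() == 0) {
   close();
  }
 }

 protected abstract void close();
}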

when Solr initializes a Solr core, it will create one SolrIndexSearcher, and put it into searcherList, but keep 2 reference to this searcher: one is in the searcherList(_realtimeSearchers or _searchers), one is the variable realtimeSearcher.
org.apache.solr.core.SolrCore.openNewSearcher(boolean, boolean):
  RefCounted<SolrIndexSearcher> newSearcher = newHolder(tmp, searcherList); // refcount now at 1
  // Increment reference again for "realtimeSearcher" variable.  It should be at 2 after.
  // When it's decremented by both the caller of this method, and by realtimeSearcher being replaced,
  // it will be closed.
  newSearcher.incref();
  realtimeSearcher = newSearcher;
  searcherList.add(realtimeSearcher);
The Solr core always keeps these 2 references to the SolrIndexSearcher instance until the core is closed or unloaded, which calls SolrCore.closeSearcher(). That decreases the count to 0, SolrCore releases the related resources and removes the RefCounted from its searcherList, and since nothing points at the searcher any more, GC can reclaim it.
Code: SolrCore.newHolder(SolrIndexSearcher, List<RefCounted<SolrIndexSearcher>>), shown simplified below.
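The key point is the close() hook that removes the holder from searcherList; searcherLock and searcherList are fields of SolrCore, and details are simplified:

private RefCounted<SolrIndexSearcher> newHolder(SolrIndexSearcher newSearcher,
  final List<RefCounted<SolrIndexSearcher>> searcherList) {
 RefCounted<SolrIndexSearcher> holder = new RefCounted<SolrIndexSearcher>(
   newSearcher) {
  @Override
  public void close() {
   try {
    synchronized (searcherLock) {
     // Remove the holder so nothing references the searcher any more.
     searcherList.remove(this);
    }
    resource.close(); // close the SolrIndexSearcher itself
   } catch (Exception e) {
    // logged in the real code
   }
  }
 };
 holder.incref(); // set the ref count to 1 to account for this reference
 return holder;
}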

When we send a request to a Solr request handler, SolrDispatchFilter creates a SolrQueryRequest, which holds a RefCounted searcherHolder. The request handler can call SolrQueryRequest.getSearcher() to get the SolrIndexSearcher, which increases the searcher's reference count. At the end of the request, SolrDispatchFilter calls SolrQueryRequest.close(), which decreases the reference count.
How we should write our own code
In our own code, if we create a SolrQueryRequest, we have to call close() once we no longer need it.
If we call req.getCore().getSearcher(), this increases the reference count; after we are done with the searcher, we have to call decref() to decrease it.

Otherwise we cause a memory leak: the reference count of the searcher never reaches zero, so the searcher is never closed, and caches such as filterCache, queryResultCache, documentCache and fieldValueCache are never cleaned; they are kept in memory forever. A minimal sketch of the correct pattern is below.
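In this sketch, the surrounding method and the LocalSolrQueryRequest construction are just examples; the two finally blocks are the point:

void searchExample(SolrCore core, SolrParams params) throws Exception {
 SolrQueryRequest myReq = new LocalSolrQueryRequest(core, params);
 try {
  RefCounted<SolrIndexSearcher> refCounted = myReq.getCore().getSearcher();
  try {
   SolrIndexSearcher searcher = refCounted.get();
   // ... use the searcher ...
  } finally {
   refCounted.decref(); // release the searcher reference
  }
 } finally {
  myReq.close(); // release the request and its resources
 }
}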
Code example
To demonstrate, I wrote a simple request handler which creates a LocalSolrQueryRequest to run a facet query, but forgets to close the SolrQueryRequest.


public class DemoUnClosedSolrRequest extends RequestHandlerBase {
  public void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp)
      throws Exception {
    SolrCore core = req.getCore();
    SolrQuery query = new SolrQuery();
    query.setQuery("datatype:4").setFacet(true).addFacetField("szkb")
        .setFacetMinCount(2);
    SolrQueryRequest facetReq = new LocalSolrQueryRequest(core, query);
    try {
      SolrRequestHandler handler = core.getRequestHandler("/select");
      handler.handleRequest(facetReq, new SolrQueryResponse());
    } finally {
   // Don't forget to close SolrQueryRequest
      //facetReq.close();
    }
  }
}
Test Code

public void unclosedSearcher() throws Exception {
    long startTime = System.currentTimeMillis();
    int i = 0;
    try {
      for (; i < 100000; i++) {
        // in the server, this request handler will increase the reference
        // count, but forget to decrease it.
        HttpSolrServer server = new HttpSolrServer(
            "http://mailsearchsolr:7766/solr");
        AbstractUpdateRequest request = new UpdateRequest("/demo1");
        server.request(request);
        
        // Normally a commit will close the old searcher and open a new one,
        // but in this case, because the reference count of the old
        // SolrIndexSearcher is not 0, the old searcher will not be closed.
        request = new UpdateRequest("/update");
        request.setParam("commit", "true");
        server.request(request);
      }
    } catch (Exception e) {
      e.printStackTrace();
      throw e;
    } finally {
      System.out.println("run " + i + " times");
      System.out.println("Took " + (System.currentTimeMillis() - startTime)
          + " mills");
    }
  }
If we run this DemoUnClosedSolrRequest and then run a commit, normally the commit would close the old searcher and open a new one. But in this case, because the reference count of the old searcher is not zero, the old searcher is not closed; it is kept in memory forever, including its caches, since no one removes it from SolrCore.searchList. Thus GC can't reclaim it.

Test Result

After running 6817 iterations in 58 minutes, the client fails:

org.apache.solr.common.SolrException: Server at http://host:port/solr returned non ok status:500, message:Server Error
run 6817 times
Took 3513530 mills

In the server, it throws OutOfMemoryError:
SEVERE: null:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
        at org.apache.solr.servlet.SolrDispatchFilter.sendError(SolrDispatchFilter.java:733)
        at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.OutOfMemoryError: Java heap space
        at org.apache.lucene.util.OpenBitSet.<init>(OpenBitSet.java:88)
        at org.apache.solr.search.DocSetDelegateCollector.collect(DocSetDelegateCollector.java:56)
        at org.apache.lucene.search.Scorer.score(Scorer.java:64)
        at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:605)
        at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:297)
        at org.apache.solr.search.SolrIndexSearcher.getDocListAndSetNC(SolrIndexSearcher.java:1553)
        at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1300)
        at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:395)
        at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:412)
        at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:208)
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
        at com.commvault.solr.handler.DemoUnClosedSearcher.handleRequestBody(DemoUnClosedSearcher.java:24)
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
The memory usage chart (not shown here) increases constantly.
Solr Admin Console
In the Solr Admin console, we can see a lot of SolrIndexSearcher instances.
CPU usage
After the JVM throws OutOfMemoryError, it tries hard to GC, but with no effect, as these searchers are kept in SolrCore.searchList forever.
Another example
This example demonstrates what happens if we forget to decref req.getCore().getSearcher(). The same thing happens: the old searcher is not cleaned up; instead, it is kept in core.searchList forever.
public class DemoUnClosedSearcher extends RequestHandlerBase {
  public void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp)
      throws Exception {
    RefCounted<SolrIndexSearcher> refCounted = req.getCore().getSearcher();
    try {
      SolrIndexSearcher searcher = refCounted.get();
      String qstr = "datatype:4";
      QParser qparser = QParser.getParser(qstr, "lucene", req);
      Query query = qparser.getQuery();
      
      int topn = 1;
      TopDocs topDocs = searcher.search(query, topn);
      for (int i = 0; i < topDocs.scoreDocs.length; i++) {
        ScoreDoc match = topDocs.scoreDocs[i];
        Document doc = searcher.doc(match.doc);
        System.out.println(doc.get("contentid"));
      }      
    } finally {
       // Don't forget to decref the RefCounted<SolrIndexSearcher>.
      //refCounted.decref();
    }
  }
}

Lessons Learned
1. Read the documentation/javadoc when using an API.
For example, from the javadoc of SolrQueryRequest we know it is not thread safe, so we shouldn't share it across threads, and we know we should call its close() method explicitly when we no longer need it.

The same applies to SolrCore.getSearcher(): it must be decremented when no longer needed. Likewise for SolrCoreState.getIndexWriter: it must be decremented when no longer needed.
2. Use tools like VisualVM to monitor threads and memory usage.
3. Solr uses JMX to monitor its resources. We can check the values in the Solr Admin console or VisualVM.
4. Generate and analyze a heap dump with VisualVM or Eclipse MAT. Use OQL to find the instances and check their values.
