Image Transformer (Resize, Rotate, Flip and Enhance)




The server side is deployed on GAE; it uses GAE's ImagesService to resize, rotate, flip and enhance images.
GAE Images Java API Overview

Using ResultSpecification to Filter Annotators to Boost OpenNLP UIMA Performance

The Problem:
We use opennlp-uima to extract entities such as person, organization, location, date, time, money, and percentage. But in most cases, a client only wants to extract one or a few kinds of entities: for example, just person and location.

OpenNlpTextAnalyzer.pear runs all 12 annotators in sequence, which is bad from a performance perspective. Check the flowConstraints/fixedFlow definition in OpenNlpTextAnalyzer.xml.

We want opennlp-uima to run only the needed annotators, to boost its performance.

The Solution: Using ResultSpecification
UIMA's descriptors include a section under the XML capabilities element where the descriptor may specify inputs and outputs. These end up informing the ResultSpecification which is provided to the annotator. The ResultSpecification can be queried by the annotator code to see what the annotator ought to produce.

PersonTitleAnnotator and TutorialDateTime in the uimaj-examples project use the ResultSpecification to check whether the annotator needs to run at all, which boosts performance:

public void process(CAS aCAS) throws AnalysisEngineProcessException {
    // If the ResultSpec doesn't include the PersonTitle type, we have nothing to do.
    if (!getResultSpecification().containsType("example.PersonTitle", aCAS.getDocumentLanguage())) {
      if (!warningMsgShown) {
        logger.log(Level.WARNING, m); // m: a warning message string built earlier in the example
        warningMsgShown = true;
      }
      return;
    }
    // ... annotation logic omitted
}
We need to make the following changes so that opennlp-uima honors the ResultSpecification and filters annotators.
1. Update Annotator's analysisEngineDescription outputs to reflect its capabilities
Take PersonNameFinder.xml as an example: we need to add opennlp.uima.Person as below.
Make similar changes in these files: PersonNameFinder.xml, LocationNameFinder.xml, OrganizationNameFinder.xml, DateNameFinder.xml, TimeNameFinder.xml, MoneyNameFinder.xml, PercentageNameFinder.xml, PosTagger.xml, Tokenizer.xml, Parser.xml, Chunker.xml.
<capabilities>
  <capability>
    <inputs />
    <outputs>
      <type>opennlp.uima.Person</type>
    </outputs>
    <languagesSupported>
      <language>en</language>
    </languagesSupported>
  </capability>
</capabilities>
Due to a bug in opennlp-uima, we also need to change the NameType nameValuePair in TimeNameFinder.xml from opennlp.uima.Person to opennlp.uima.Time.
Please refer to Wrong NameType in TimeNameFinder.xml; otherwise the annotator would classify time phrases such as "this afternoon" and "tomorrow morning" as Persons instead of Times.

2. Change Annotator's code to honor ResultSpecification
The annotators behind PersonNameFinder.xml, LocationNameFinder.xml, OrganizationNameFinder.xml, DateNameFinder.xml, and TimeNameFinder.xml extend the same parent class: opennlp.uima.namefind.AbstractNameFinder. We can change its process method as below:
public final void process(CAS cas) {
  ResultSpecification rs = getResultSpecification();
  boolean run = rs.containsType(mNameType.getName())
      || rs.containsType(mNameType.getName(), cas.getDocumentLanguage());
  if (!run) {
    return;
  }
  // omitted ....
}
opennlp.uima.parser.Parser:
public void process(CAS cas) {
  ResultSpecification rs = getResultSpecification();
  boolean run = rs.containsType("opennlp.uima.Parse")
      || rs.containsType("opennlp.uima.Parse", cas.getDocumentLanguage());
  if (!run) {
    return;
  }
  // original parsing logic ...
}
opennlp.uima.chunker.Chunker:
public void process(CAS tcas) {
  ResultSpecification rs = getResultSpecification();
  boolean run = rs.containsType("opennlp.uima.Chunk")
      || rs.containsType("opennlp.uima.Chunk", tcas.getDocumentLanguage());
  if (!run) {
    return;
  }
  // original chunking logic ...
}
opennlp.uima.postag.POSTagger:
public void process(CAS tcas) {
  ResultSpecification rs = getResultSpecification();
  boolean run = rs.containsType("opennlp.uima.Token:pos")
      || rs.containsType("opennlp.uima.Token:pos", tcas.getDocumentLanguage());
  if (!run) {
    return;
  }
  // original tagging logic ...
}
Changes on the Client Side
On the client side, we need to add the result types to a ResultSpecification when calling org.apache.uima.analysis_engine.AnalysisEngine.process(CAS, ResultSpecification):
  ResultSpecification rs = UIMAFramework.getResourceSpecifierFactory()
      .createResultSpecification();
  rs.addResultType("opennlp.uima.Person", true);
  rs.addResultType("opennlp.uima.Location", true);
  this.ae.process(this.cas, rs);
In our project, we also use UIMA's Regular Expression Annotator to extract entities such as SSN, phone number, credit card, etc. We define more than 20 entities and their corresponding regexes in its concepts.xml.

Resources
UIMA Result Specifications
UIMA References
http://comments.gmane.org/gmane.comp.apache.uima.general/5670

Winmerge: Including subfolders when using the Windows Explorer integration

The Problem
Today, when I used WinMerge's shell integration to compare two folders, WinMerge only listed files and folders at the first level, and the tree mode was disabled (grayed out).

The Solution
I can press Ctrl+O or click File -> Open to open the "Select Files or Folders" dialog, which fills Left and Right with the current values. Then I can change the folder mode to "Include Subfolders" and switch to tree mode.

Another way is to press and hold the Ctrl key while clicking WinMerge or Compare in the context menu.

This will include subfolders when comparing the two folders.

Another way is to go to "Edit" -> "Options" -> "Shell Integration" and enable "Include subfolders by default".

Personally I like the previous solution better (pressing the Ctrl key).

Resources
WinMerge: Opening files and folders

Exif Viewer: Extract Metadata from Image



The server side is deployed on GAE, and uses the Java metadata-extractor library to extract metadata from image files.
https://drewnoakes.com/code/exif/
https://code.google.com/p/metadata-extractor/w/list

PowerShell and Java: Stumbling over the UTF-8 BOM

The Problem
While writing a PowerShell script to clean a CSV file and remove invalid records, I mistakenly added -encoding utf8 when using out-file to write the result to the final CSV.

Then I run the following command to import the csv file to Solr:
http://localhost:8080/solr/update/csv?literal.f1=1&literal.f2=2&&header=true&stream.file=C:\1.csv&literal.cistate=17&commit=true
Solr generates a unique id, docid, by concatenating f1, f2, and the first column of the CSV file: localId.
But to my surprise, there was only one document in Solr, with docid 12.
Checking http://localhost:8080/solr/cvcorefla4/admin/luke shows:
  <int name="numDocs">1</int>
  <int name="maxDoc">16420</int>
  <int name="deletedDocs">16419</int>
  <long name="version">2521</long>


I ran http://localhost:8080/solr/select?q=* and copied the response to a new file in Notepad++ with UTF-8 encoding; everything seemed fine. But when I changed the file encoding to ANSI, it looked like below:
  <str name="docid">12</str>
  <arr name="id">
  <str>f0e662cefe56a31c6eec5d53e64f988d</str>
  </arr>
Notice the garbled invisible character before id. Also, the field is not the expected string, but an array of strings.

So I wrote a simple Java application to view the real content of "id":

  public void testUnicode() {
    // The pasted "id" actually carries an invisible prefix character,
    // which is exactly what we want to inspect here.
    String str = "\uFEFFid";
    for (int i = 0; i < str.length(); i++) {
      System.out.println(str.charAt(i));
      System.out.println((int) str.charAt(i));
      System.out.println(escapeNonAscii(str.charAt(i) + ""));
    }
    System.out.println("***************");
    System.out.println(str.length());
    System.out.println(str.hashCode());
    System.out.println(escapeNonAscii(str));
    System.out.println("***************");
  }
  private static String escapeNonAscii(String str) {
    StringBuilder retStr = new StringBuilder();
    for (int i = 0; i < str.length(); i++) {
      int cp = Character.codePointAt(str, i);
      int charCount = Character.charCount(cp);
      if (charCount > 1) {
        i += charCount - 1; // skip the low surrogate of a supplementary code point
        if (i >= str.length()) {
          throw new IllegalArgumentException("truncated unexpectedly");
        }
      }
      if (cp < 128) {
        retStr.appendCodePoint(cp);
      } else {
        retStr.append(String.format("\\u%x", cp));
      }
    }
    return retStr.toString();
  }
The invisible prefix is \ufeff. U+FEFF is the byte order mark (BOM). Now the problem is obvious: out-file -encoding utf8 actually writes UTF-8 with a BOM, but Java reads the file as plain UTF-8 and keeps the BOM in the data. So to Java, the first column name in the first line is \ufefflocalId, not localId; every row's localId value comes back empty, all rows get the same docid (12), and each one overwrites the previous document.
The Solution
Actually the fix is simple: the default encoding of out-file is Unicode (UTF-16), which works fine with Java here. If we are sure all content is in the ASCII range, we can also specify -encoding ascii.
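To make the mismatch concrete, here is a minimal sketch (hypothetical class name, not our production code) of how Java sees a UTF-8-with-BOM header, plus a defensive strip on the Java side:

```java
import java.nio.charset.StandardCharsets;

public class BomDemo {
    // Remove a leading byte order mark if the decoder left it in the string.
    static String stripBom(String s) {
        return s.startsWith("\uFEFF") ? s.substring(1) : s;
    }

    public static void main(String[] args) {
        // Bytes of "localId" as written by out-file -encoding utf8: BOM + ASCII text.
        byte[] withBom = {(byte) 0xEF, (byte) 0xBB, (byte) 0xBF,
                          'l', 'o', 'c', 'a', 'l', 'I', 'd'};
        String header = new String(withBom, StandardCharsets.UTF_8);

        System.out.println(header.length());           // 8, not 7: U+FEFF survives decoding
        System.out.println(header.equals("localId"));  // false: this is why the column "disappears"
        System.out.println(stripBom(header).equals("localId")); // true
    }
}
```

If changing the PowerShell side is not an option, calling stripBom on the first line read is enough to make the header name match again.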

Resource
Byte order mark
Unicode Character 'ZERO WIDTH NO-BREAK SPACE' (U+FEFF)

PowerShell - Working with CSV: Delete Rows Without Enough Columns

The Problem
We import CSV files into a Solr server, which is very sensitive to the number of columns. If there are not enough columns, Solr fails with an exception:
org.apache.solr.common.SolrException: CSVLoader: input=file:/C:/1.csv, line=9158554,expected 19 values but got 17

So we would like a script to clean the CSV: remove the rows that do not have enough data, i.e. whose number of columns is not 19.

I don't know how to get the number of columns of the current record directly, but it's easy to check whether the value of the last field is null: that means there are not enough columns.
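For comparison, the same "keep only complete rows" filter can be sketched in plain Java (hypothetical class name; a naive comma split that, unlike a real CSV parser, ignores quoted fields):

```java
import java.util.ArrayList;
import java.util.List;

public class CsvRowFilter {
    // Keep only rows whose naive comma-split yields exactly `expected` columns.
    // split with limit -1 keeps trailing empty fields, so "a,b," counts as 3 columns.
    static List<String> keepCompleteRows(List<String> rows, int expected) {
        List<String> kept = new ArrayList<>();
        for (String row : rows) {
            if (row.split(",", -1).length == expected) {
                kept.add(row);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        List<String> rows = List.of("a,b,c", "a,b", "x,,z");
        System.out.println(keepCompleteRows(rows, 3)); // [a,b,c, x,,z]
    }
}
```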

The Solution: Using Powershell
The PowerShell command would be like below (here the last field is named to):

Import-Csv .\1.csv | Where-Object { $_.to -ne $null} | Export-Csv .\rst1.csv -NoTypeInformation


To also log which lines do not have enough columns:
Import-Csv .\1.csv | Foreach-Object {$line = 0} {
   if ($_.bcc -eq $null) {
      echo "ignore line: $line, not enough fields"
   } else {
      convertto-csv -inputobject $_ -NoTypeInformation | select -Skip 1 | out-file -filepath .\r1.csv -Append
   }
   $line++
}
The complete script is cleanCsv.ps1. Usage: .\cleanCsv.ps1 -filePath .\1.csv -destFilePath .\r1.csv
[CmdletBinding()]
Param(
  [Parameter(Mandatory=$True)]
  [string]$filePath,

  [Parameter(Mandatory=$True)]
  [string]$destFilePath,
  
  [Parameter(Mandatory=$False)]
  [string]$lastField="bcc"
)
# $ignoreLine = 2323533;

Get-Date -format "yyyy-MM-dd HH:mm:ss"
$sw = [Diagnostics.Stopwatch]::StartNew()

If (Test-Path $destFilePath ){
  echo "remove old $destFilePath"
 Remove-Item $destFilePath 
}

gc $filePath -TotalCount 1 | out-file -filepath $destFilePath
Import-Csv $filePath | Foreach-Object {$line = 0} {
  if ($_.$lastField -eq $null) {
    echo "ignore line: $line, not enough fields"
  } else {
    convertto-csv -inputobject $_ -NoTypeInformation | select -Skip 1 | out-file -filepath $destFilePath -Append
  }
  $line++
}

$sw.Stop()
Get-Date -format "yyyy-MM-dd HH:mm:ss"
echo "took " $sw.Elapsed

Using Maven with Google App Engine

Maven is very good at managing a project's dependencies, so I also choose Maven when developing GAE projects.

The Google Eclipse plugin doesn't support GAE Maven development very well: we can't use it to directly run or debug the app, or deploy it to App Engine.

To run the app in local GAE server:
cd ${mypp}\${mypp-ear}
mvn -f ..\pom.xml clean install && mvn appengine:devserver

To debug the app, add the following to pom.xml:
<plugins>
  <plugin>
    <groupId>com.google.appengine</groupId>
    <artifactId>appengine-maven-plugin</artifactId>
    <configuration>
      <jvmFlags>
        <jvmFlag>-Xdebug</jvmFlag>
        <jvmFlag>-agentlib:jdwp=transport=dt_socket,address=9999,server=y,suspend=n</jvmFlag>
      </jvmFlags>
      <disableUpdateCheck>true</disableUpdateCheck>
    </configuration>
  </plugin>
</plugins>

Start the local GAE server, then create a Remote Java Application debug configuration in Eclipse that connects to localhost:9999. Now we can debug the GAE Maven application in Eclipse.

Change Application Id
For various reasons, we may want to deploy the same application under multiple application ids. We may use GAE as the backbone application while the client is a mobile app or even a Google Blogger page (since Google doesn't allow ads in GAE apps, we may use Blogger as the front end that talks to the GAE server to do the real work).
When our application gets popular and exceeds the free quota, we may want to duplicate it under another application id.

If we are using Maven to build and deploy, we need to change the application id in ${mypp}\${mypp-ear}\src\main\application\META-INF\appengine-application.xml.

Then deploy it to the new application id:
cd ${mypp}\${mypp-ear}
mvn -f ..\pom.xml clean install && mvn appengine:update

Resources

Boilerpipe Demo

Welcome to the Web API for the boilerpipe Java library.
boilerpipe provides algorithms to detect and remove the surplus "clutter" (boilerplate, templates) around the main textual content of a web page.

The original demo is at http://boilerpipe-web.appspot.com/, but it's so popular that it often hits an "Over Quota" error, so I made this simple demo for anyone interested in boilerpipe.




Blogger: Hide Labels of Small Amount of Posts from Label Widget

The Problem:
The Blogger (Blogspot) Label widget allows us to show all labels, or only specific labels. But in some cases, we may want to show all labels except those that have only a small number of posts, such as fewer than 5, or show all except some specific labels.
The Solution
We could use JavaScript to hide unwanted labels, but that is inefficient; this logic is better implemented on the server side. So we change the implementation of the Label widget to add one more check.

Go to the dashboard of your blog, select Template -> Edit HTML. In the Jump to widget dropdown list, select Label1. This jumps to the definition of Label1. The code basically loops over and outputs all labels. All we have to do is add a condition: if a label has fewer than 5 posts, skip it.

We can also change the code to ignore some labels, based on its name: data:label.name.

All we have to do is add a check, data:label.count >= 5, around each loop body. The complete code is like below:
<b:widget id='Label1' locked='false' title='Labels' type='Label'>
 <b:includable id='main'>
  <b:if cond='data:title'>
   <h2>
    <data:title/>
   </h2>
  </b:if>
  <div expr:class='"widget-content " + data:display + "-label-widget-content"'>
   <b:if cond='data:display == "list"'>
    <ul>
     <b:loop values='data:labels' var='label'>
      <b:if cond='data:label.count >= 5'>          
       <li>
        <b:if cond='data:blog.url == data:label.url'>
         <span expr:dir='data:blog.languageDirection'>
          <data:label.name/>
         </span>
         <b:else/>
         <a expr:dir='data:blog.languageDirection' expr:href='data:label.url'>
          <data:label.name/>
         </a>
        </b:if>
        <b:if cond='data:showFreqNumbers'>
         <span dir='ltr'>(<data:label.count/>)</span>
        </b:if>
       </li>
      </b:if>   
     </b:loop>
    </ul>
    <b:else/>
    <b:loop values='data:labels' var='label'>
     <b:if cond='data:label.count >= 5'>        
      <span expr:class='"label-size label-size-" + data:label.cssSize'>
       <b:if cond='data:blog.url == data:label.url'>
        <span expr:dir='data:blog.languageDirection'>
         <data:label.name/>
        </span>
        <b:else/>
        <a expr:dir='data:blog.languageDirection' expr:href='data:label.url'>
         <data:label.name/>
        </a>
       </b:if>
       <b:if cond='data:showFreqNumbers'>
        <span class='label-count' dir='ltr'>(<data:label.count/>)</span>
       </b:if>
      </span>
     </b:if>    
    </b:loop>
   </b:if>
   <b:include name='quickedit'/>
  </div>
 </b:includable>
</b:widget>

ConEmu failed to startup: command group is empty, choose your shell?

I am a fan of ConEmu, as it allows me to open various consoles (cmd, PowerShell, Cygwin bash) in one place with tabs. It can also auto-save and restore opened tabs.

Today when I reopened ConEmu, it prompted a dialog: "Command group is empty, choose your shell?".

I guess this is because the machine lost power last time and ConEmu exited abruptly.

The fix is easy: click OK, and in the opened dialog, type C:\Windows\System32\cmd.exe in the "Create new console" field, or select ${cmd} from the dropdown list.
Resource: 
ConEmu - The Windows Terminal/Console/Prompt we've been waiting for?

JavaScript: Convert Relative Link in Selected Html to Absolute Path

In the previous post, we got the selected HTML, but usually we also need to convert relative links to absolute paths.
The Code

Tested in Chrome:
var thisPageUrl = stripQueryStringAndHashFromPath(window.location.href);

function stripQueryStringAndHashFromPath(url) {
 return url.split("?")[0].split("#")[0]; 
}

var hrefPattern = /href="([^"]*)"/;
function changeToAbsoluteUrl(linkElement)
{
   var outerHTML = linkElement.outerHTML;
   var match = outerHTML.match(hrefPattern);
   if (match != null)
   {
      var href = match[1];
      // Only rewrite relative links, and resolve them against this page's URL
      // instead of naively concatenating, which would produce broken paths.
      if (!/^[a-z][a-z0-9+.-]*:/i.test(href)) {
        linkElement.href = new URL(href, thisPageUrl).href;
      }
   }
}

function parseSelection() {
  var selection = window.getSelection();
  
  if(selection.rangeCount > 0)
  {   // please get source code of getSelectionHtml from 
      // http://lifelongprogrammer.blogspot.com/2014/05/javascript-get-selected-html-in-webpage.html
      var selectedHtml = getSelectionHtml(selection);
      var parser = new DOMParser();
      var selectedDoc = parser.parseFromString(selectedHtml, "text/html");      
      var elements = selectedDoc.getElementsByTagName("a");

      var arrayLength = elements.length;
      for (var i = 0; i < arrayLength; i++) {
       var element = elements[i];
       changeToAbsoluteUrl(element);
      }
      
      selectedHtml = selectedDoc.getElementsByTagName("body")[0].innerHTML;
      console.log("selectedHtml: " + selectedHtml);
  }   
}

parseSelection();
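The browser does this resolution for us via the href property; as a cross-check, the same relative-to-absolute resolution can be sketched server-side with Java's java.net.URI (class name and example URLs are made up):

```java
import java.net.URI;

public class LinkResolver {
    // Resolve a possibly-relative href against the page URL, RFC 3986 style.
    static String toAbsolute(String pageUrl, String href) {
        return URI.create(pageUrl).resolve(href).toString();
    }

    public static void main(String[] args) {
        String page = "http://example.com/posts/2014/article.html";
        System.out.println(toAbsolute(page, "images/pic.png"));
        // http://example.com/posts/2014/images/pic.png
        System.out.println(toAbsolute(page, "/about.html"));
        // http://example.com/about.html
        System.out.println(toAbsolute(page, "http://other.com/x"));
        // http://other.com/x (already absolute, unchanged)
    }
}
```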

JavaScript: Get the Selected Html in Webpage

The Code
Tested in Chrome:
function getSelectionHtml(selection) {
  var range = selection.getRangeAt(0);
  
  var div = document.createElement('div');
  div.appendChild(range.cloneContents());
  var htmlText = div.innerHTML;
  return htmlText;
}

function parseSelection() {
  var selection = window.getSelection();
  
  if(selection.rangeCount > 0)
  {
      var selectedHtml = getSelectionHtml(selection);
      console.log("selectedHtml: " + selectedHtml);
  }   
}

parseSelection();

Android Studio: No classes.dex in built apk

The Problem: The apk does not include classes.dex
Recently, whenever I deploy my simple Android application to the emulator in Android Studio, it fails with the error: Failure [INSTALL_FAILED_DEXOPT]

Checking the logcat log shows: "The apk does not include classes.dex".
05-13 16:10:23.740      394-417/system_process W/ActivityManager﹕ No content provider found for permission revoke: 
05-13 16:10:29.150    1055-1055/? W/dalvikvm﹕ DexOptZ: zip archive '/data/app/org.lifelongprogrammer.tools.myapp-1.apk' does not include classes.dex
05-13 16:10:29.160        53-53/? W/installd﹕ DexInv: --- END '/data/app/org.lifelongprogrammer.tools.myapp-1.apk' --- status=0xff00, process failed

I checked the built apk file at \app\build\apk; there is indeed no classes.dex in it.
I searched Google, but didn't find a solution.
The Workaround: Build the app manually in command line
So I tried building the app manually with gradlew.bat: "gradlew.bat clean assembleDebug", then checked the built apk file at \app\build\apk; it is larger and does include classes.dex.

Deploying this apk to the emulator works, and it also works on my Android phone.

I think the problem is in Android Studio itself, but at least I can continue developing my application :)

Dict Demo

This is a practical usage of Jsoup to extract content from a webpage.




JSoup Demo

This is an example demonstrating how to use JSoup's selectors to extract content from a webpage.














Simple Web Proxy

Some companies block certain websites for unreasonable reasons, for example blocking the Google Cache site webcache.googleusercontent.com by marking it as Proxy/Anonymizer. This is a pain when we have to access the Google cache, so I wrote this simple tool.

