Java Network Miscs

About "Too many open files"

The number of open files allowed is an Operating System dependent setting. This is a limited resource that is shared by all processes on the machine. The standard programming practice is to make sure to close all files that the program opens. On Unix systems this applies to socket connections as well.

A very common mistake is to assume that the runtime garbage collector (GC) will look after resources in the same way that it looks after memory references. However, while the memory associated with the object itself is collected by the GC, the resources associated with that object may not be cleaned up.

There are at least two kinds of resource associated with a stream object:

* Memory associated with the stream object itself, and with managing state within the stream

* An operating system file descriptor or handle that relates to the underlying file within the file system

The GC will take care of the first aspect while leaving the underlying descriptor open, thus consuming valuable system resources over time and eventually resulting in a denial-of-service scenario.

When you use Runtime.getRuntime().exec() to run an external script, the API automatically opens three streams (stdin, stdout, stderr) for each Process it creates. It is the responsibility of the caller to close those streams when done.


See the sample code for how to close them; scan for "close".
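A minimal sketch of closing a Process's streams; the command name here is illustrative, not part of any particular script:

```java
import java.io.IOException;

public class ProcessStreams {
    // Run an external command, wait for it, then close the three streams
    // that exec() opened for us; otherwise the file descriptors leak.
    static int runAndClose(String command) throws IOException, InterruptedException {
        Process p = Runtime.getRuntime().exec(new String[] {command});
        try {
            return p.waitFor();
        } finally {
            p.getOutputStream().close(); // the child's stdin
            p.getInputStream().close();  // the child's stdout
            p.getErrorStream().close();  // the child's stderr
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("exit code: " + runAndClose("hostname"));
    }
}
```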

Avoid using a nameless InputStream/OutputStream; keep a reference and close it explicitly. The following, for example, leaks a file descriptor:

Properties prop = new Properties();
prop.load(new FileInputStream(propertyFile));

As the Properties.load Javadoc notes, the specified stream remains open after this method returns.
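One way to avoid the leak is to hold the stream in a try-with-resources block, which closes it even if load() throws. A minimal sketch (the file name is illustrative):

```java
import java.io.FileInputStream;
import java.io.FileWriter;
import java.io.IOException;
import java.util.Properties;

public class LoadProps {
    // Load a properties file without leaking the FileInputStream.
    static Properties load(String propertyFile) throws IOException {
        Properties prop = new Properties();
        // try-with-resources closes the stream even if load() throws
        try (FileInputStream in = new FileInputStream(propertyFile)) {
            prop.load(in);
        }
        return prop;
    }

    public static void main(String[] args) throws IOException {
        // create a tiny file so the example is self-contained
        try (FileWriter w = new FileWriter("sample.properties")) {
            w.write("greeting=hello\n");
        }
        System.out.println(load("sample.properties").getProperty("greeting")); // prints hello
    }
}
```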

From Elliotte Rusty Harold's Java I/O:

Not all streams need to be closed -- byte array output streams do not need to be closed, for example. However, streams associated with files and network connections should always be closed when you're done with them.

For example, if you open a file for writing and neglect to close it when you're through, then other processes may be blocked from reading or writing to that file.

Does closing a BufferedOutputStream also close the underlying OutputStream?

Yes, it closes it.

The behaviour is actually inherited from FilterOutputStream. The Javadoc for FilterOutputStream.close states:

Closes this output stream and releases any system resources associated with the stream.

The close method of FilterOutputStream calls its flush method, and then calls the close method of its underlying output stream.

Since BufferedOutputStream extends FilterOutputStream, closing the BufferedOutputStream also closes the underlying OutputStream. You should close the BufferedOutputStream, not the underlying OutputStream, so that the buffer's contents are flushed before the underlying stream is closed.
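This can be demonstrated with a small stream that records whether it was closed; the TrackingStream class is a sketch written for the demonstration, not part of any real API:

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class CloseChain {
    // A stream that records whether close() was called on it.
    static class TrackingStream extends ByteArrayOutputStream {
        boolean closed = false;
        @Override public void close() throws IOException {
            closed = true;
            super.close();
        }
    }

    public static void main(String[] args) throws IOException {
        TrackingStream underlying = new TrackingStream();
        BufferedOutputStream bos = new BufferedOutputStream(underlying);
        bos.write(42);
        bos.close(); // flushes the buffer, then closes the underlying stream
        System.out.println("underlying closed: " + underlying.closed); // prints true
    }
}
```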

Notes from Fundamental Networking in Java:

Closing a connected socket

In fact there are several ways to close a socket:

(a) close the socket itself with Socket.close();

(b) close the output stream obtained from the socket by calling the method Socket.getOutputStream().close().

(c) close the input stream obtained from the socket by calling the method Socket.getInputStream().close().

Any one of these is sufficient, and exactly one of them is necessary, to close the socket and release all its resources. You can't use more than one of these techniques on any given socket. As a general rule you should close the output stream rather than the input stream or the socket, as the output stream may require flushing.
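A loopback sketch of technique (b): closing only the output stream is enough to close the socket itself. The port and message are arbitrary choices for the demonstration:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class CloseViaOutput {
    // Returns whether the client socket reports closed after we close
    // only its output stream.
    static boolean closeViaOutputStream() throws IOException {
        try (ServerSocket server = new ServerSocket(0)) { // any free port
            Socket client = new Socket("localhost", server.getLocalPort());
            try (Socket accepted = server.accept()) {
                client.getOutputStream().write("bye".getBytes());
                // Closing the output stream flushes pending data and
                // closes the socket (and its input stream) as well.
                client.getOutputStream().close();
                return client.isClosed();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("client closed: " + closeViaOutputStream()); // prints true
    }
}
```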

Object Stream Deadlock

The following code fragment will always deadlock if present at both client and server:

ObjectInputStream in = new ObjectInputStream(socket.getInputStream());

ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());

The ObjectInputStream constructor calls readStreamHeader to read and verify the header and version written by the corresponding ObjectOutputStream.writeStreamHeader method. The constructor blocks until it has finished reading the serialization stream header. Code that waits for an ObjectInputStream to be constructed before creating the corresponding ObjectOutputStream for that stream will deadlock, since the ObjectInputStream constructor blocks until a header is written to the stream, and the header will not be written until the ObjectOutputStream constructor executes.

So always create the ObjectOutputStream before the ObjectInputStream for the same socket.
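A loopback sketch of the correct ordering: each side constructs its ObjectOutputStream (and flushes the header) before constructing its ObjectInputStream, so neither constructor blocks forever. The "ping"/"pong" payloads are arbitrary:

```java
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class StreamOrder {
    // Exchanges one object over loopback and returns what the server read.
    static String exchange() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Thread client = new Thread(() -> {
                try (Socket s = new Socket("localhost", server.getLocalPort())) {
                    // Output stream first: its constructor writes the header
                    ObjectOutputStream out = new ObjectOutputStream(s.getOutputStream());
                    out.flush(); // push the header so the peer's input constructor can return
                    ObjectInputStream in = new ObjectInputStream(s.getInputStream());
                    out.writeObject("ping");
                    out.flush();
                    in.readObject(); // "pong"
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            client.start();
            try (Socket s = server.accept()) {
                ObjectOutputStream out = new ObjectOutputStream(s.getOutputStream());
                out.flush();
                ObjectInputStream in = new ObjectInputStream(s.getInputStream());
                String received = (String) in.readObject();
                out.writeObject("pong");
                out.flush();
                client.join();
                return received;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(exchange()); // prints ping
    }
}
```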

Socket Output

All output operations on a TCP socket are synchronous as far as the local send buffer is concerned, and asynchronous as far as network and the remote application are concerned.

After writing to a socket, there is no assurance that the data has been received by the application (or TCP) at the other end. The only way an application can be assured that a data transmission has arrived at the remote application is by receiving an acknowledgement explicitly sent by the remote application.

It is best to attach a BufferedOutputStream to the output stream obtained from the socket. Ideally the BufferedOutputStream's buffer should be as large as the maximum request or response to be transmitted, if this is knowable in advance and not unreasonably large; otherwise it should be at least as large as the socket's send buffer. This minimizes context switches into the kernel, and it gives TCP more data to write at once, allowing it to form larger segments and use the network more efficiently. It also minimizes switching back and forth between the JVM and JNI. You must flush the buffer at appropriate points, i.e., after completing the writing of a request message and before reading the reply, to ensure that any data in the BufferedOutputStream's buffer gets to the socket.

Similarly, ideally the BufferedInputStream's buffer should be at least as large as the socket's receive buffer so that the receive buffer is drained as quickly as possible.
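A loopback sketch of this advice: the buffer sizes follow the socket's own kernel buffers, and the explicit flush is what actually pushes the request to the socket. The message content is arbitrary:

```java
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class BufferedSocketIO {
    // Sends a message through buffered socket streams and returns what arrives.
    static String roundTrip(String msg) throws IOException {
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("localhost", server.getLocalPort());
             Socket accepted = server.accept()) {
            // size the buffers to match the socket's kernel buffers
            BufferedOutputStream out = new BufferedOutputStream(
                    client.getOutputStream(), client.getSendBufferSize());
            DataInputStream in = new DataInputStream(new BufferedInputStream(
                    accepted.getInputStream(), accepted.getReceiveBufferSize()));

            byte[] payload = msg.getBytes();
            out.write(payload);
            out.flush(); // nothing reaches the socket until we flush

            byte[] buf = new byte[payload.length];
            in.readFully(buf);
            return new String(buf);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip("request")); // prints request
    }
}
```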

Don't mix readObject/writeObject with readUnshared/writeUnshared.

os.writeObject("Hello World");
os.writeObject("Hello World");
os.writeObject("Hello World");

The first call writes the full serialized string to the stream. For each subsequent entry, writeObject just writes a special flag that says "use the same object as found at position 1 in this object stream". It is basically doing a compression of the written data for you.

Using writeUnshared, each call writes a full, independent copy of the object to the stream, with no back references.
All this means that you can't mix readObject/writeObject and readUnshared/writeUnshared.

If you want to readUnshared then you must writeUnshared.

If you write with writeObject (using the compression) and then readUnshared (as if there were no compression), you are going to have problems: the reader hits that back-reference flag and does not know what to do with it.

This fails with an error like: java.io.InvalidObjectException: cannot read back reference as unshared

Writing with writeUnshared and then reading with readObject is OK, however.
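The failure can be reproduced entirely in memory. In this sketch the second readUnshared hits the back reference written by the second writeObject:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.ObjectStreamException;

public class UnsharedMix {
    // Writes the same string twice with writeObject, then tries to read
    // both entries back with readUnshared; returns the failure message.
    static String mismatch() throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            String s = "Hello World";
            out.writeObject(s);
            out.writeObject(s); // written as a back reference to the first entry
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            in.readUnshared(); // first entry reads fine
            try {
                in.readUnshared(); // hits the back-reference flag
                return "no error";
            } catch (ObjectStreamException e) {
                return e.getMessage();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(mismatch()); // cannot read back reference as unshared
    }
}
```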

BufferedInputStream/BufferedOutputStream just add a local buffer to improve network read/write performance; they do not affect the remote side.
So one side can use buffered streams while the remote side uses unbuffered streams.
This can happen when you update existing code, leaving different code versions on the two nodes.


Fundamental Networking in Java


