Network errors, Smart Client

Hi all,

We are having a lot of network errors after upgrading to Servoy 8.2.1.

Server Information
Servoy version 8.2.1 -releaseNumber 3105
Port used by RMI Registry: 1099
Repository version 49 

JVM Information
java.vm.name=Java HotSpot(TM) 64-Bit Server VM
java.version=1.8.0_144
java.vm.info=mixed mode
java.vm.vendor=Oracle Corporation 

Operating System Information
os.name=Mac OS X
os.version=10.11.6
os.arch=x86_64 

System Information
Code Cache Non-heap memory: allocated=56256K, used=55717K, max=245760K
Metaspace Non-heap memory: allocated=88064K, used=86104K, max=204800K
Compressed Class Space Non-heap memory: allocated=10752K, used=10246K, max=1048576K
Heap memory: allocated=1650688K, used=1240625K, max=2919424K
Number of Processors: 8

We were running ‘http&socket’, but now I have switched to ‘http’ to see if this would help.
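For anyone wanting to try the same: the tunnel mode is a single property (key name as it appears in my settings dump below; shown here as it would look in servoy.properties, though I changed it via the admin pages):

```
SocketFactory.tunnelConnectionMode=http
```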

The errors mostly look like these:

Error flushing message buffer to client %XXXX...	 	 
Exception, see log file for full details: java.rmi.ConnectException: Connection refused to host: 10.0.0.4; nested exception is:
ERROR	com.sebster.tunnel.impl.od	multiplexer failed for client XXXX-XXX....	 
Exception, see log file for full details: java.io.IOException: Error in read

How do I go about getting rid of these?

What do I need to tell the guys running the network?
What can I do by tweaking the server settings?

Thanks,

An update. I suspect Servoy is innocent; it is just a victim of networking issues. The interesting thing is that Servoy behaves normally for some external users, but is dead slow on the LAN. They are looking at the network…

Hi Christian,

I’m on Servoy 7.4.10 using http&socket and see the same error logs:

2017-12-04 18:10	pool-2-thread-1822	ERROR	com.servoy.j2db.datasource.ClientManager	Error flushing message buffer to client XYZ	 	 
java.rmi.ConnectException: Connection refused to host: 192.168.1.38; nested exception is: 
    	java.net.ConnectException: no multiplexer found for server with id=YYY
    	at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:601) 
    	at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:198) 
    	at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:184) 
    	at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:110) 
    	at java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(RemoteObjectInvocationHandler.java:178) 
    	at java.rmi.server.RemoteObjectInvocationHandler.invoke(RemoteObjectInvocationHandler.java:132) 
    	at com.sun.proxy.$Proxy34.isAlive(Unknown Source) 
    	at com.servoy.j2db.server.dataprocessing.ClientProxy.Zc(ClientProxy.java:84) 
    	at com.servoy.j2db.server.dataprocessing.Zo.run(Zo.java:17) 
    	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) 
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) 
    	at java.lang.Thread.run(Thread.java:662) 
    Caused by: java.net.ConnectException: no multiplexer found for server with id=YYY 
    	at com.sebster.tunnel.impl.hd.createSocket(hd.java:12) 
    	at com.servoy.j2db.server.rmi.tunnel.WrappingCompressingRMIClientSocketFactory.createSocket(WrappingCompressingRMIClientSocketFactory.java:28) 
    	at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:595) 
    	... 11 more

I don’t know what causes these errors or how I can fix them.
It’s not just one customer, either; all of them get these error messages.
Maybe it’s caused by standby mode?

Are any other developers seeing these error logs?

Thanks,
Alex

Hi Alex,

I have now switched back to http&socket as http did not make much difference.
All the network errors mean the Servoy server is struggling and using lots of heap memory.

The networking guys are looking at the internal network today (firewall, routers, etc.); performance is very poor.

Thanks for your feedback.
It would be nice if you could post the networking guys’ findings.

A router connecting two parts of the office was playing up and they restarted it.
So far, things look a lot better…

So you have no more error messages in your Servoy log?

We still get the same messages, but fewer of them, with about 30 Smart Clients connected.

There is another issue: the server claims to run out of heap memory and restarts about once a day.

INFO   | jvm 1    | 2017/12/06 09:24:33 | Dec 06, 2017 9:24:33 AM org.apache.coyote.http11.Http11Processor service
INFO   | jvm 1    | 2017/12/06 09:24:33 | INFO: Error parsing HTTP request header
INFO   | jvm 1    | 2017/12/06 09:24:33 |  Note: further occurrences of HTTP header parsing errors will be logged at DEBUG level.
INFO   | jvm 1    | 2017/12/06 09:24:33 | java.lang.IllegalArgumentException: Invalid character found in method name. HTTP method names must be tokens
INFO   | jvm 1    | 2017/12/06 09:24:33 | 	at org.apache.coyote.http11.Http11InputBuffer.parseRequestLine(Http11InputBuffer.java:417)
INFO   | jvm 1    | 2017/12/06 09:24:33 | 	at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:667)
INFO   | jvm 1    | 2017/12/06 09:24:33 | 	at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
INFO   | jvm 1    | 2017/12/06 09:24:33 | 	at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:789)
INFO   | jvm 1    | 2017/12/06 09:24:33 | 	at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1437)
INFO   | jvm 1    | 2017/12/06 09:24:33 | 	at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
INFO   | jvm 1    | 2017/12/06 09:24:33 | 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
INFO   | jvm 1    | 2017/12/06 09:24:33 | 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
INFO   | jvm 1    | 2017/12/06 09:24:33 | 	at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
INFO   | jvm 1    | 2017/12/06 09:24:33 | 	at java.lang.Thread.run(Thread.java:748)
INFO   | jvm 1    | 2017/12/06 09:24:33 | 
INFO   | jvm 1    | 2017/12/06 10:26:06 | Dec 06, 2017 10:26:06 AM sun.rmi.transport.tcp.TCPTransport$AcceptLoop executeAcceptLoop
INFO   | jvm 1    | 2017/12/06 10:26:06 | WARNING: RMI TCP Accept-1099: accept loop for ServerSocket[addr=null,localport=0] throws
INFO   | jvm 1    | 2017/12/06 10:26:06 | java.lang.OutOfMemoryError: unable to create new native thread
STATUS | wrapper  | 2017/12/06 10:26:06 | Filter trigger matched.  Restarting JVM.
INFO   | jvm 1    | 2017/12/06 10:26:06 | 	at java.lang.Thread.start0(Native Method)
INFO   | jvm 1    | 2017/12/06 10:26:06 | 	at java.lang.Thread.start(Thread.java:717)
INFO   | jvm 1    | 2017/12/06 10:26:06 | 	at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
INFO   | jvm 1    | 2017/12/06 10:26:06 | 	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
INFO   | jvm 1    | 2017/12/06 10:26:06 | 	at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:415)
INFO   | jvm 1    | 2017/12/06 10:26:06 | 	at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:372)
INFO   | jvm 1    | 2017/12/06 10:26:06 | 	at java.lang.Thread.run(Thread.java:748)
INFO   | jvm 1    | 2017/12/06 10:26:06 | 
INFO   | jvm 1    | 2017/12/06 10:26:06 | java.lang.reflect.InvocationTargetException
INFO   | jvm 1    | 2017/12/06 10:26:06 | 	at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.continueAfterAcceptFailure(TCPTransport.java:499)
INFO   | jvm 1    | 2017/12/06 10:26:06 | 	at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:474)
INFO   | jvm 1    | 2017/12/06 10:26:06 | 	at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:372)
INFO   | jvm 1    | 2017/12/06 10:26:06 | 	at java.lang.Thread.run(Thread.java:748)
INFO   | jvm 1    | 2017/12/06 10:26:06 | Caused by: java.lang.OutOfMemoryError: unable to create new native thread
STATUS | wrapper  | 2017/12/06 10:26:06 | Filter trigger matched.  Restarting JVM.
INFO   | jvm 1    | 2017/12/06 10:26:06 | 	at java.lang.Thread.start0(Native Method)
INFO   | jvm 1    | 2017/12/06 10:26:06 | 	at java.lang.Thread.start(Thread.java:717)
INFO   | jvm 1    | 2017/12/06 10:26:06 | 	at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
INFO   | jvm 1    | 2017/12/06 10:26:06 | 	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
INFO   | jvm 1    | 2017/12/06 10:26:06 | 	at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:415)
INFO   | jvm 1    | 2017/12/06 10:26:06 | 	... 2 more
INFO   | jvm 1    | 2017/12/06 10:26:06 | Exception in thread "MessageScheduler"

This never happened on the 7.4.x server and has not happened on an 8.2.1 server I have running on Windows 2008. I allocated 3 GB, but it does not help; we never see the heap go that high.

Then I read that you get exactly the same error when the machine runs out of threads, and that allocating lots of heap memory makes the problem slightly worse, so I reduced the heap to 2.5 GB last night. Right now the network settings are:

java.rmi.server.hostname: 127.0.0.1
servoy.rmiStartPort: 1099
rmi.connection.timeout: 60
ApplicationServer.pingDelay: 60
SocketFactory.tunnelConnectionMode: http&socket
SocketFactory.compress: Yes
SocketFactory.useSSL: No
SocketFactory.tunnelUseSSLForHttp: Yes

On a Mac you can run

sysctl kern.num_threads
sysctl kern.num_taskthreads

to see the maximum number of threads system-wide and per task.
On our Mac Mini it comes back as 10240 and 2048. My MacBook Pro has 20480 and 4096.

I’m trying to work out what is happening on the Servoy server just before it falls over…
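In case it helps anyone watching for the same pattern: the JVM can report its own thread counts through the standard java.lang.management API. A minimal sketch (plain Java, nothing Servoy-specific; it could run in-process from a scheduled job or be adapted for a JMX client):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Minimal sketch: log live/peak/total-started thread counts so a steady
// climb can be spotted before "unable to create new native thread" hits.
public class ThreadWatch {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        System.out.println("live threads:   " + threads.getThreadCount());
        System.out.println("peak threads:   " + threads.getPeakThreadCount());
        System.out.println("total started:  " + threads.getTotalStartedThreadCount());
    }
}
```

Logging these once a minute alongside the wrapper log should show whether the thread count ramps up just before the restart.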

I did a fresh install of 8.2.1 on the server last night, and while the network errors persist, memory usage is way down.
All the processes on the server are using about 1,200 threads in total, and this seems stable.
The old install was 8.2.0, later updated to 8.2.1…
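A side note on why the thread count matters even when the heap looks fine: each Java thread reserves its own native stack outside the heap. A back-of-the-envelope sketch (the 1 MB per-thread stack is an assumption, the usual 64-bit HotSpot default, not something I verified on this server):

```java
// Rough native (off-heap) memory cost of the observed ~1,200 threads.
// Assumption: default -Xss of 1 MB per thread on 64-bit HotSpot.
public class StackMath {
    public static void main(String[] args) {
        long threads = 1200;        // observed total across server processes
        long stackSizeKb = 1024;    // assumed per-thread stack size (-Xss1m)
        long totalMb = threads * stackSizeKb / 1024;
        System.out.println("~" + totalMb + " MB reserved for thread stacks"); // ~1200 MB
    }
}
```

Under that assumption, raising the heap from 2.5 GB to 3 GB only squeezes the memory left for thread stacks, which would fit the observation that more heap made the problem slightly worse.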