Servoy memory usage question

Hi, I have a Debian Linux server which we use only for Servoy. It runs two instances of Servoy (5.2.14), one in 32-bit mode and one in 64-bit mode, plus MySQL and Terracotta. All users are web clients, and there are a couple of headless clients running. (Why all on the one box… it’s a long story.)

It seems to require a lot more RAM than I’d expect. Can anyone tell me what’s normally expected for RAM usage? The configuration is:

  • Servoy64 with -Xmx3g -Xms1g -XX:MaxPermSize=256m
  • Servoy32 with -Xmx512m -Xms512m -XX:MaxPermSize=256m
  • Terracotta with default settings -Xms512m -Xmx512m
  • MySQL, standard install
  • some monitors which use almost nothing, e.g. jstatd

With 8GB of RAM it would run out of memory and crash from time to time. This seems odd, as we only allocate 4GB for Servoy and TC.

Increasing the RAM to 12GB stopped the crashes, but I find that even with a light web client load (5-10 WCs) the RAM used slowly climbs to around 10-11GB, as seen in this graph, even though the servoy-admin page reports the heap memory used as low. The first part of the graph shows typical usage. I restarted the cluster around 7am on Sunday, then it slowly came back up to similar levels. On a busy day it would peak much more quickly.

[attachment=0]page7image2952-1.png[/attachment]

You can see that it uses 8-10GB of RAM, with 2-5GB of cache and very little free RAM. In the Servoy admin panel, the reported heap memory usually shows plenty of free space, e.g.

System Information
Heap memory: allocated=1004928K, used=349316K, max=2796224K
None Heap memory: allocated=99520K, used=81911K, max=311296K

An htop snippet from just after a restart shows Servoy taking most of the RAM:

 PID USER     PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
25676 servoy    20   0 5123M 2489M 17148 S 116. 20.7 11:24.35 /bin/sh /home/servoy/terracotta/platform/bin/dso-java.sh -Dtc.config=
22501 servoy    20   0 2857M 2464M  5168 S 72.0 20.5 11h26:03 /usr/lib/jvm/ia32-java-6-sun/jre/bin/java -Xbootclasspath/p:/home/ser
22474 servoy    20   0 1277M  671M  5556 S 46.0  5.6 13h01:18 /usr/lib/jvm/java-6-sun/jre/bin/java -server -XX:MaxDirectMemorySize=
 1599 mysql     20   0  327M 84012  4184 S  7.0  0.7 25h12:30 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql
 1418 servoy    20   0 3301M 32932  5560 S  0.0  0.3 17:55.79 /usr/bin/jstatd -p 8082 -J-Djava.security.policy=/home/servoy/jstatd/
 4425 servoy    20   0 24968  7780  1560 S  0.0  0.1  0:00.31 -bash

What things should I be looking at to improve memory management?

I would like to see the full commands listed by htop, so we can tell which process is which and confirm that the memory settings are actually passed to each one.
I see a dso-java.sh, which should be one of the Servoy servers (why isn’t the other one a ‘dso-java’ as well?). Then there is one 32-bit java command, and one more java command.

You said that htop was run right after restarting the cluster. I’m trying to see that on the graph (>9 GB virtual (reserved) memory for the processes and >5.5 GB actual physical memory in use)… what do “used” and “cache” memory mean in that graph tool? I would think that ‘cache’ refers to the file cache used by the OS - that is not relevant here, as it can be shrunk/freed up for application use when needed. And ‘used’ refers either to the virtual memory or to the actual physical memory allocated as reported by htop, I guess… If it’s the ‘RES’ memory, then I see it reaching 5.5 GB only about 2h 30m after the restart on the graph. Or was htop run just before the restart, not after?

A combination of jvisualvm and jstatd (or jvisualvm directly, if you have a UI on the server) can show the memory usage of the Java application, excluding the internal JVM memory usage.
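For reference, a minimal remote-monitoring setup looks roughly like this (the policy file path is a placeholder; the port matches the jstatd instance already visible in the htop output above):

# Minimal sketch: publish JVM stats for a remote jvisualvm via jstatd.
# The policy file path is a placeholder used for illustration only.
cat > /tmp/jstatd.policy <<'EOF'
grant codebase "file:${java.home}/../lib/tools.jar" {
    permission java.security.AllPermission;
};
EOF

jstatd -p 8082 -J-Djava.security.policy=/tmp/jstatd.policy

# In jvisualvm on your workstation: File > Add Remote Host, enter the server
# address, and the JVMs registered with jstatd appear under that host.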

Anyway, you will never get the same values reported for the OS-level java process and for jvisualvm/the Servoy admin pages (which are limited by the -X… arguments), because the latter only report heap/PermGen space. That is relevant only for the Java application itself and gives no insight into the internal memory usage of the JVM. Adding that internal usage would give you the COMMIT size of the entire java process, not the WORKING SET size that is usually shown by default (talking in Windows terms now, but it’s similar on Linux with VIRT and RES) - for example, you might even see a lower OS-reported working set than what jvisualvm reports as heap size + PermGen size. For the things you can configure with the -X… parameters you should only look in jvisualvm (or similar) - that shows the application’s memory requirements. But if you are interested in how much RAM the whole process takes from the OS, that will be what jvisualvm reports plus some amount used internally by the JVM, which is decided by the JVM itself. See http://www.coderanch.com/t/203149/Perfo … anager-JVM
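A quick way to see this difference on the server itself is to compare what the JVM reports with what the OS reports for the same process (a rough sketch; the PID is a placeholder for one of the Servoy/Terracotta processes, and the commands must be run as the user that owns the JVM):

# Committed and used Java heap (KB) as the JVM sees it; for Sun Java 6 the
# 'jstat -gc' columns are S0C S1C S0U S1U EC EU OC OU PC PU ...
PID=22474   # placeholder PID
jstat -gc $PID | awk 'NR==2 { printf "heap committed: %d KB, heap used: %d KB\n", $1+$2+$5+$7, $3+$4+$6+$8 }'

# Resident set size (KB) of the same process as the OS sees it:
ps -o rss= -p $PID

The two will normally differ; the gap (in either direction) is untouched heap pages plus memory the JVM uses internally.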

To come back to the initial question: I would like to see the full commands as I stated in the beginning, plus, at the moment when the most memory is in use:

  • the values reported by htop
  • the values reported for heap/non-heap (used/size) by jvisualvm, after triggering a garbage-collection cycle for each of the 3 processes (Terracotta / app. server 1 / app. server 2) - a command-line way to force a GC is sketched after this list
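If triggering a GC from jvisualvm is not possible for a remote process, one way to force it from the command line (assuming the Sun JDK tools are installed on the server) is jmap, which performs a full GC before computing the live-object histogram:

# Forces a full GC as a side effect; run on the server as the user that owns
# the JVM. The PID is a placeholder for the process you want to collect.
jmap -histo:live 22474 > /dev/null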

I did a small test with 2 clustered servers and the servoy_sample_crm solution a while ago to see the memory use trend when starting new clients.
It’s not exactly a real deployment, it’s just an example, but it’s the same idea. I thought you might be interested.

Heap usage monitored running Terracotta 3.7.0 with 2 clustered 6.1.2 Servoy Application Servers, under Windows 7 64 bit with Java 1.6.0_30 64 bit (heap sizes are truncated to MB, and sampled after requesting garbage collection with jvisualvm):

  • after starting the Terracotta server (let’s call it TS) and the 2 clustered Servoy Application Servers (let’s call them SAS1 and SAS2) => TS 13 MB, SAS1 40 MB, SAS2 40 MB
  • after starting a “servoy_sample_crm” web client on SAS1 (and using it a bit) => TS 13 MB, SAS1 47 MB, SAS2 40 MB
  • after starting a “servoy_sample_crm” web client on SAS2 (and using it a bit) => TS 13 MB, SAS1 47 MB, SAS2 47 MB
  • after starting another “servoy_sample_crm” web client on SAS1 (and using it a bit) => TS 13 MB, SAS1 54 MB, SAS2 47 MB

Conclusion: When running clustered with Terracotta and Servoy, the bulk (almost all) of the heap memory used for a client is reserved only on the application server that serves it.
The shared cluster heap - which is kept entirely in the Terracotta server process, with parts of it duplicated/available to particular Servoy application server processes as needed - is very small in comparison. It does grow with each new client that connects to any server in the cluster, but the shared info about each client is minimal.

So the Terracotta Server’s allocated heap increased, but only very little, while the Servoy Application Server heap sizes increased by about 7 MB per client. Of course, for larger solutions these increments will grow, but the shared heap allocation will remain very small in comparison to the heap reserved on the Servoy Application Server servicing the client.
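As a back-of-envelope illustration only (the per-client cost of a real solution will be larger than the sample’s ~7 MB), the expected app-server heap can be estimated as base + per-client cost × number of clients:

# Illustrative sizing estimate based on the sample test above; all numbers are
# assumptions, not measurements from a real deployment.
BASE_MB=40         # idle application-server heap seen in the test
PER_CLIENT_MB=7    # heap growth per web client seen in the test
CLIENTS=100        # hypothetical peak of concurrent web clients
echo "Estimated app-server heap: $(( BASE_MB + PER_CLIENT_MB * CLIENTS )) MB"

For 100 clients that gives roughly 740 MB with the sample numbers - the point being that the heap to size for is almost entirely per-client memory on the application server, not shared cluster memory.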

Hi Andrei,
Thanks for the detailed reply.

I’ve attached a screenshot of htop, just after restarting the cluster from the command line, and the Java VisualVM graphs. Perform GC isn’t available from within VisualVM for this remote host.
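Side note: “Perform GC” in VisualVM needs a JMX connection rather than a jstatd-only one. A rough sketch of flags that could be added to the start scripts to enable that (the port is an assumption; don’t disable authentication/SSL on a machine reachable from untrusted networks):

# Illustrative JMX flags; port 8086 is an arbitrary example.
-Dcom.sun.management.jmxremote.port=8086
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false

The host:port can then be added in jvisualvm via File > Add JMX Connection.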

Please note that we’ve modified the startup scripts to run under Daemontools. I’d manually restarted one of the Servoy instances yesterday via the admin panel, which would explain why there was only 1 ‘dso-java’.

start-terracotta-server.sh

#! /bin/bash

# Start a local Terracotta server.
# Use this instead of the script provided as part of Terracotta to allow it to run under Daemontools.

exec /usr/lib/jvm/java-6-sun/jre/bin/java -server -XX:MaxDirectMemorySize=9223372036854775807 -Xms1g -Xmx1g -XX:ErrorFile=/home/servoy/terracotta/java_crash_logs/hs_err_pid%p.log -XX:+HeapDumpOnOutOfMemoryError -Dcom.sun.management.jmxremote -Dtc.install-root=/home/servoy/terracotta/ -Dtc.config=/home/servoy/terracotta/tc-config.xml -Dsun.rmi.dgc.server.gcInterval=31536000 -cp /home/servoy/terracotta/lib/tc.jar com.tc.server.TCServerMain

start_servoy_clustered_32.sh

#!/bin/sh

# Start a clustered Servoy instance using 32-bit java.
# Use this instead of the script provided as part of Terracotta to allow it to run under Daemontools.

export JAVA_HOME="/usr/lib/jvm/ia32-java-6-sun/jre"
exec "/home/servoy/terracotta/platform/bin/dso-java.sh" -Dtc.config=/home/servoy/terracotta/tc-config.xml -Dtc.node-name=Servoy32 -Djava.awt.headless=true -Xmx512m -Xms512m -XX:MaxPermSize=256m -XX:ErrorFile=/home/servoy/terracotta/java_crash_logs/hs_err_pid%p.log -classpath .:lib/ohj-jewt.jar:lib/MRJAdapter.jar:lib/compat141.jar:lib/commons-codec.jar:lib/commons-httpclient.jar:lib/activation.jar:lib/antlr.jar:lib/commons-collections.jar:lib/commons-dbcp.jar:lib/commons-fileupload-1.2.1.jar:lib/commons-io-1.4.jar:lib/commons-logging.jar:lib/commons-pool.jar:lib/dom4j.jar:lib/help.jar:lib/jabsorb.jar:lib/hibernate3.jar:lib/j2db.jar:lib/j2dbdev.jar:lib/jdbc2_0-stdext.jar:lib/jmx.jar:lib/jndi.jar:lib/js.jar:lib/jta.jar:lib/BrowserLauncher2.jar:lib/jug.jar:lib/log4j.jar:lib/mail.jar:lib/ohj-jewt.jar:lib/oracle_ice.jar:lib/server-bootstrap.jar:lib/servlet-api.jar:lib/wicket-extentions.jar:lib/wicket.jar:lib/wicket-calendar.jar:lib/slf4j-api.jar:lib/slf4j-log4j.jar:lib/joda-time.jar:lib/rmitnl.jar:lib/networktnl.jar com.servoy.j2db.server.ApplicationServer "$@" 1>> server.log 2>> server.log

start_servoy_clustered_64.sh

#!/bin/sh

# Start a clustered Servoy instance using 64-bit java.
# Use this instead of the script provided as part of Terracotta to allow it to run under Daemontools.

export JAVA_HOME="/usr/lib/jvm/java-6-sun/jre"
exec "/home/servoy/terracotta/platform/bin/dso-java.sh" -Dtc.config=/home/servoy/terracotta/tc-config.xml -Dtc.node-name=Servoy64 -Djava.awt.headless=true -Xmx3g -Xms1g -XX:MaxPermSize=256m -XX:ErrorFile=/home/servoy/terracotta/java_crash_logs/Servoy64/hs_err_pid%p.log -classpath .:lib/ohj-jewt.jar:lib/MRJAdapter.jar:lib/compat141.jar:lib/commons-codec.jar:lib/commons-httpclient.jar:lib/activation.jar:lib/antlr.jar:lib/commons-collections.jar:lib/commons-dbcp.jar:lib/commons-fileupload-1.2.1.jar:lib/commons-io-1.4.jar:lib/commons-logging.jar:lib/commons-pool.jar:lib/dom4j.jar:lib/help.jar:lib/jabsorb.jar:lib/hibernate3.jar:lib/j2db.jar:lib/j2dbdev.jar:lib/jdbc2_0-stdext.jar:lib/jmx.jar:lib/jndi.jar:lib/js.jar:lib/jta.jar:lib/BrowserLauncher2.jar:lib/jug.jar:lib/log4j.jar:lib/mail.jar:lib/ohj-jewt.jar:lib/oracle_ice.jar:lib/server-bootstrap.jar:lib/servlet-api.jar:lib/wicket-extentions.jar:lib/wicket.jar:lib/wicket-calendar.jar:lib/slf4j-api.jar:lib/slf4j-log4j.jar:lib/joda-time.jar:lib/rmitnl.jar:lib/networktnl.jar com.servoy.j2db.server.ApplicationServer "$@" 1>> server.log 2>> server.log

and htop

The VIRT memory in htop shouldn’t worry you (so maybe I shouldn’t have mentioned it in the first post - I thought it was something more measurable); the RES is what is actually used: more info here.

You can use the free command to check the actual used/free memory (the second line of the output, “-/+ buffers/cache”); more info here. These used/cache/buffers might actually be the values visible on the memory graph from your first post… At the end of the graph about 7.5 GB were used, against 4.5 GB max Java application heap size + > 0.5 GB max PermGen size + thread stack space + internal JVM memory. But then again, at the beginning you have 10 GB used…
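For the record, the quick check looks like this (the “Mem:” line counts the kernel page cache and buffers as used, so the “-/+ buffers/cache” line is the one that reflects what the applications themselves occupy):

# Memory actually taken by applications = the 'used' column on the
# '-/+ buffers/cache' line; page cache/buffers can be reclaimed when needed.
free -m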

I guess you ran htop before getting the jvisualvm graphs - which would explain the small memory differences (the heap size was decreasing over time in the 64-bit Servoy Application Server, and the used heap was climbing - so the heap size probably went up as well - in the 32-bit Servoy Application Server).

So

|---------------------------------------------------------------------------------------------------------
| (MB)                              |   HTOP USED MEMORY (RES)  |      JVISUALVM (APPROX from Graphs)    |
|                                   |                           |  USED HEAP | HEAP SIZE | MAX HEAP SIZE |
|---------------------------------------------------------------------------------------------------------
| Terracotta Server                 |          278              |   10-300   |    1024   |     1024      |
| Servoy Application Server 32bit   |          486              |  150-300   |     512   |      512      |
| Servoy Application Server 64bit   |         2121              |  300-625   |    1024   |     3072      |
----------------------------------------------------------------------------------------------------------
(HEAP SIZE was ~1750 for 64 bit in the beginning when htop was probably run, so it's ok)

I put intervals instead of single values for USED HEAP because what you probably see there is memory going up and down between garbage-collection cycles, so most likely the minimum value is the actual used object space, the rest being not-yet-collected garbage. The total OS/RAM memory used by the cluster was < 3GB, and probably decreasing, at that time.

So the HTOP RES is roughly similar to the HEAP SIZE for the two Servoy Application Server processes, and the used Java heap is within the imposed limits with room to spare (both in growing heap size and in available free heap). I suspect that for the Terracotta Server process the JVM allocated memory differently because of the -server argument and the fixed heap size; in that case RES is close to the actual used heap… So RES in htop and used heap (plus what you see in the PermGen tab) in jvisualvm are the most relevant values (with the mention that RES might include unused Java heap as well, and it certainly includes the application’s non-heap memory plus memory used internally by the JVM).

Conclusion: I don’t see problems yet. I would be curious to see all this measured again at the moment when you have 10 GB of used memory in your system as reported by free command.

Total used Java object heap was probably < 500 MB (not counting garbage).

It’s running at about 10G now. I increased the heap for TC to 1GB to see if this fixed a data broadcasting issue (see another post https://www.servoy.com/forum/viewtopic.php?f=5&t=19034 in this forum).

~$ free
             total       used       free     shared    buffers     cached
Mem:      12334848   10706828    1628020          0     130568    1197592
-/+ buffers/cache:    9378668    2956180
Swap:      1949688      67136    1882552

My main question is: with X GB of RAM, what are reasonable values for -Xmx so that the server won’t fall over? I would’ve thought that with 12GB of RAM I should be able to allocate 5GB to Servoy64, but it fell over with those settings, so I dropped it to 3GB.

Can you post what jvisualvm reports as well?

VisualVM screen shots

and free at about the same time

             total       used       free     shared    buffers     cached
Mem:      12334848   10424328    1910520          0     133112    1203820
-/+ buffers/cache:    9087396    3247452
Swap:      1949688      67036    1882652

and htop at about the same time

What’s wrong with that last htop? It lists looots of almost identical java processes…

Andrei Costescu:
What’s wrong with that last htop? It lists looots of almost identical java processes…

Ah, forgot to hide userland processes

Also, it doesn’t seem to be the same server. The TC Server and one of the app. servers have been running for 12+ hours already.
What you showed earlier is from another server, right?

I see the overall free memory increased by 300 MB in the 20 or so minutes between the last 2 free commands (so the htop shows slightly higher values at that time).
Here is the new situation based on the htop before the jvisualvm screenshots:

----------------------------------------------------------------------------------------------------------
| (MB)                              |   HTOP USED MEMORY (RES)  |      JVISUALVM (APPROX from Graphs)    |
|                                   |                           |  USED HEAP | HEAP SIZE | MAX HEAP SIZE |
|---------------------------------------------------------------------------------------------------------
| Terracotta Server                 |         1269              |  500-700   |    1024   |     1024      |
| Servoy Application Server 32bit   |         2053              |  250-350   |     512   |      512      |
| Servoy Application Server 64bit   |         5477              |  850-900   |    2160   |     3072      |
----------------------------------------------------------------------------------------------------------

So the used Java heap is 1600-1950 MB. The Java heap sizes sum up to ~3700 MB. Yet the used RAM reported by htop is > 8.7GB… The remaining ~5 GB off-heap seems too much to me (for thread stacks/PermGen/internal memory).

I wonder what the JVM stores in there… is it some extreme optimisation or is it a bug… Did you try newer JVM versions (e.g. the latest 1.6)? I’ll also post a question on the Terracotta forums to see whether Terracotta might have something to do with it (maybe something related to DGC).
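If you want to dig into where that off-heap resident memory actually sits, one rough way is to look at the process’s memory mappings (the PID is a placeholder for the 64-bit app server process):

# Lists the process's memory mappings with their resident sizes (RSS, in KB,
# third column of 'pmap -x') and shows the 20 largest ones; big anonymous
# mappings outside the Java heap point at native/JVM-internal allocations.
pmap -x 22474 | sort -n -k3 | tail -20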

What are the values of PermGen reported in jvisualvm (just the values, I don’t think screenshots are needed for this)? I guess they are under the max of 256 MB per process.

Do you have a crash log? Which of the 3 processes crashes eventually?

What is the Terracotta version?

Here’s a fresh lot of data, all taken at ~7:23, and definitely all from the same server (I only have 1 Linux server running at the moment):

             total       used       free     shared    buffers     cached
Mem:      12334848   12198620     136228          0     295340    2308332
-/+ buffers/cache:    9594948    2739900
Swap:      1949688      58576    1891112

htop
[attachment=0]Screen Shot 2012-11-14 at 7.22.56 AM.png[/attachment]

VisualVM graphs at the same time, low load on the server, low memory use reported within Servoy, but almost no free RAM reported with free or htop.

Servoy32 PermGen Memory
Size: 111,017,984 B
Used: 90,002,912 B
Max: 268,435,456 B

Servoy64 PermGen Memory
Size: 108,199,936 B
Used: 108,195,728 B
Max: 268,435,456 B

TC PermGen Memory

Size: 38,928,384 B
Used: 37,193,592 B
Max: 85,983,232 B

and

JVM Information (Servoy64)
java.vm.name=Java HotSpot™ 64-Bit Server VM
java.version=1.6.0_26
java.vm.info=mixed mode
java.vm.vendor=Sun Microsystems Inc.

JVM Information (Servoy32)
java.vm.name=Java HotSpot™ Server VM
java.version=1.6.0_26
java.vm.info=mixed mode
java.vm.vendor=Sun Microsystems Inc.

So it’s basically the same: used Java heap low, used OS memory very high. I’ll need to run some stuff locally and see if I get the same, but it really seems to be JVM (or Terracotta) related. It’s as if the JVM forgets to return memory to the OS. I’m not sure that this causes the crashes though - not if the Java heap can reuse this strange already-allocated space when it needs to.
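If you want to experiment with how eagerly HotSpot gives unused heap back to the OS, there are two flags worth knowing about (illustrative values only; note that with -Xms equal to -Xmx, as on the 32-bit instance and Terracotta, the heap is fully committed up front and will not shrink regardless):

# Target percentages of free heap after a GC; the heap shrinks when free space
# exceeds MaxHeapFreeRatio and grows when it drops below MinHeapFreeRatio.
-XX:MinHeapFreeRatio=20
-XX:MaxHeapFreeRatio=40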

What version of Terracotta do you use? Any crash logs so that we can see which one crashed exactly and why?

It can be seen in the Terracotta server heap (which seems to be slowly but constantly increasing towards the max) that it hasn’t yet run the distributed garbage collector. I think by default, without any tuning, it runs once a day. I mentioned in the other thread how this can be improved with the 2 settings.
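For reference, the DGC interval lives in tc-config.xml under the server’s <dso> element; a rough sketch with illustrative values (check the settings linked in the other thread for the actual recommendation):

<garbage-collection>
  <enabled>true</enabled>
  <verbose>true</verbose>      <!-- also logs each DGC run -->
  <interval>3600</interval>    <!-- seconds, e.g. hourly instead of the daily default -->
</garbage-collection>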

Hello again Andrei,

From the log, it’s Terracotta 3.7.0.

We haven’t had any Servoy application crashes since increasing the RAM to 12GB; the crashes were only at 8GB, a few weeks ago. You’d think I should be able to allocate more than 3G of the 12G to Servoy64. I’ll increase the Xmx setting for Servoy64 tonight and see what that does.

Looking at the TC RAM usage in VisualVM, it appears to increase until it hits 90%, e.g. around 11:27, then does a cleanup, which seems to take about 10 minutes…

2012-11-14 11:21:28,374 [Statistics Logger] INFO com.terracottatech.dso - memory free : 177.960243 MB
2012-11-14 11:21:28,380 [Statistics Logger] INFO com.terracottatech.dso - memory used : 833.039757 MB
2012-11-14 11:21:28,381 [Statistics Logger] INFO com.terracottatech.dso - memory max : 1011.000000 MB
2012-11-14 11:27:42,026 [TC Memory Monitor] WARN tc.operator.event - NODE : localhost:9510  Subsystem: MEMORY_MANAGER Message: Current Memory usage(94%) crossed critical threshold(90%). 
2012-11-14 11:36:28,374 [Statistics Logger] INFO com.terracottatech.dso - memory free : 653.333107 MB
2012-11-14 11:36:28,374 [Statistics Logger] INFO com.terracottatech.dso - memory used : 332.104393 MB
2012-11-14 11:36:28,374 [Statistics Logger] INFO com.terracottatech.dso - memory max : 985.437500 MB
2012-11-14 11:36:36,582 [Server Map Periodic Evictor] INFO tc.operator.event - NODE : localhost:9510  Subsystem: DCV2 Message: DCV2 Eviction - Time taken (msecs)=0, Number of entries evicted=0, Number of segments over threshold=0, Total Overshoot=0
2012-11-14 11:36:36,582 [Server Map Periodic Evictor] INFO com.tc.objectserver.impl.ServerMapEvictionStatsManager - Server Map Periodic eviction - Time taken (msecs): 0, Number of segments under threshold: 0, Number of segments over threshold: 0, Total overshoot: 0, Total number of samples requested: 0, Number of segments where eviction happened: 0, Total number of evicted entries: 0

I can try forcing the GC more often with your tips, but if it is happening anyway will it help?

[attachment=0]Screen Shot 2012-11-14 at 12.16.58 PM.png[/attachment]

The DGC tuning could help avoid TC server disk caching - which might slow down the TC server a bit (if it tends to go over Xmx). I don’t think it actually takes 10 min for that DGC to run; it’s just around 10 min between the memory-logger triggers. You can enable DGC logging as well - see the link for modifying the DGC interval in the other thread. Btw, the DGC only collects shared objects that have already been collected by the GCs in the JVMs that used them, so DGC and JVM GCs should be tuned together: there is no use in having DGC clean every 10 min if the JVM GCs clean much less often.

I thought you got crashes with 12 GB of RAM as well. If I remember correctly from the other discussion, the 8 GB RAM crash happened when you had configured Xmx (5G + 512M + 512M) + max PermGen (~400M), so around 6 GB for the whole cluster, and you already had up to 2 GB used by other things. So at that point you were really at the limit with Java heap and PermGen alone, without taking into account the other memory needs of the JVMs.
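Just to put numbers on it (a rough budget; the PermGen figure is the estimate above, and the ~2 GB for everything else is from the earlier discussion):

# Old 8 GB configuration, figures in MB
OLD_XMX=$(( 5120 + 512 + 512 ))   # Servoy64 + Servoy32 + Terracotta heap maxima
OLD_PERMGEN=400                   # rough combined max PermGen estimate
echo "Configured heap + PermGen ceiling: $(( OLD_XMX + OLD_PERMGEN )) MB"   # ~6.5 GB
# With ~2 GB already used by MySQL/the OS/other processes on an 8 GB box,
# that left essentially no headroom for thread stacks and JVM-internal memory.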

Yes, you should be able to increase the Xmx setting for Servoy 64. Let us know how it goes.