We have been slowly growing our customer base and getting more clients on our SaaS server. We run an Amazon EC2 Linux instance for the application, with the database on an Amazon RDS instance.
Lately, we've been seeing spikes in CPU utilization on the DB instance, and this then slows down all of our clients (both smart and web, but mostly smart).
We've done our best to make sure our database is indexed correctly and that it's maintained and optimized routinely.
When this happens, we do see a lot of these errors:
java.sql.SQLException: Cannot convert value '0000-00-00 00:00:00' from column 23 to TIMESTAMP.
We've scoured our databases and believe that we've gotten rid of any datetime field that has '0000-00-00…' as a value.
I have uploaded the most recent stack / sys info for your entertainment.
You do see the spikes on the database server, not the Servoy server?
I guess that at the moment of that stack trace, the Servoy server doesn't really do anything, because all I can see is many client connections just waiting for something to do…
Quickly scanning over the file, I can't find even one client doing a query…
I just see, for the most part, MySQL errors.
Many of those timestamp errors: I guess you just have to look at when that happens; it's a pure MySQL/data error, Servoy can't do much about that.
Also, I see some connection errors. Do you use a validation query for your server connection? Please use one if you don't.
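A validation query is a cheap statement the connection pool runs before handing a connection to a client, so stale connections are discarded instead of surfacing as errors. For MySQL, the usual minimal choice (entered in the validation-query field of the database server's configuration) is:

```sql
-- Minimal validation query for MySQL: cheap, touches no tables,
-- and fails fast if the connection has gone stale.
SELECT 1;
```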
chico:
Also, does anyone know how to prevent / fix the timestamp error for MySQL?
Just allow NULL values in those date/timestamp columns. Those columns also tend to have a default value of '0000-00-00 00:00:00'; get rid of that too.
We are allowing null values, but for some reason we still keep getting errors about '0000-00-00…' values, even though we've gone through each table to make sure no timestamp columns have that value.
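A sketch of that change, assuming a hypothetical table `orders` with a `created_at` column (substitute your own table and column names):

```sql
-- Hypothetical table/column names: adjust to your schema.
-- Allow NULL and drop the zero default...
ALTER TABLE orders
  MODIFY created_at TIMESTAMP NULL DEFAULT NULL;

-- ...then convert any existing zero dates to NULL
-- so the JDBC driver never sees them.
UPDATE orders
   SET created_at = NULL
 WHERE created_at = '0000-00-00 00:00:00';
```

As a driver-side workaround, MySQL Connector/J also accepts `zeroDateTimeBehavior=convertToNull` as a JDBC URL parameter, which makes the driver return zero dates as NULL instead of throwing this exception.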
Of course, we could be missing something as the errors continue.
Is there any way to find out WHAT table the errors are occurring in on the server? The error "java.sql.SQLException: Cannot convert value '0000-00-00 00:00:00' from column 21 to TIMESTAMP." is pretty useless.
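The error itself doesn't name the table, but you can hunt for the offending rows yourself via `information_schema`: list every datetime/timestamp column in the schema, then probe each one for zero values. A sketch, assuming a schema named `mydb` and, for the probe, the hypothetical `orders`/`created_at` names:

```sql
-- List every datetime/timestamp column in the schema
-- (replace 'mydb' with your schema name).
SELECT table_name, column_name
  FROM information_schema.columns
 WHERE table_schema = 'mydb'
   AND data_type IN ('datetime', 'timestamp');

-- Then, for each column found, check for zero values, e.g.:
SELECT COUNT(*)
  FROM mydb.orders
 WHERE created_at = '0000-00-00 00:00:00';
```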
We have solved our zero-date issue and that has been helping our server, but today we got more spikes on our MySQL server and customers complaining of slowdowns.
I’ve provided a dump for review.
Also, how should these be set if we're getting about 85-95 clients connecting at a given time? (This number will also steadily grow.)
Maximum connections active:
Maximum connections idle:
Maximum prepared statements idle:
Looking at the stack dump doesn't tell much; I don't even see any query being executed at that time.
What I do see is that somehow you use quite a lot of memory; used and max are getting close (+/- 900MB used with a max of +/- 1000MB).
Are there web clients? Or are those all smart clients?
Max connections I would set to 50, idle around 10; prepared statements idle can be 100.
But it would be good to know what does use so much memory. For that you need to create heap dumps (that's not that easy; you have to use jvisualvm, which comes with the JDK 6 installation, and so on).
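Put against the three fields from the earlier post, those suggestions look like this (a starting point for this load, not a hard rule):

```
Maximum connections active:        50
Maximum connections idle:          10
Maximum prepared statements idle:  100
```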
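As an alternative to attaching jvisualvm interactively, the JDK also ships command-line tools that can capture a heap dump for later analysis. A sketch, assuming the JDK's `jps` and `jmap` are on the PATH and `<pid>` is the Servoy server's JVM process id (left as a placeholder):

```shell
# Find the Servoy server's JVM process id.
jps -l

# Write a binary heap dump of live objects to heap.hprof;
# open that file in jvisualvm to see what is holding the memory.
jmap -dump:live,format=b,file=heap.hprof <pid>
```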
System Information
Heap memory: allocated=1040512K, used=964087K, max=1040512K
Non-heap memory: allocated=89632K, used=82025K, max=184320K
So you’re saying that we have it set too high? Or that too much is being used and you want to know why?
Also, I don't understand max db connections. You suggest 50, and we have 100 set with 75-85 clients at this time. I take it 1 client != 1 db connection?
The memory setting is not too high; it is almost too low (because used and max are close…).
The max connections don't have to match the number of clients. You can do that (and you do need it if you use long transactions),
but for normal use a client holds on to a connection for a very short time, so connections are shared.
Having many connections and a big idle pool can result in large memory usage; for example, I have seen Oracle dumps where the Oracle driver held on to almost 1GB of memory…