Thursday, 6 August 2009

Weblogic JMS Performance Tuning Tips

Here are a couple of real-life tips on tuning JMS performance.

Problem:
The application creates and sends JMS messages on an outgoing queue for consumption by another application.

It was observed that when the consuming application was offline for a period of time, the number of messages that could be retained on the queue before the JVM heap filled up was quite low: roughly 7,000 messages, after which OutOfMemory exceptions began to occur. Given that the consuming application could realistically be offline for a period (hence the use of an asynchronous queue), we needed to increase the number of messages that could realistically be stored on the queue.

The exception we get is shown below:


Start server side stack trace:
java.lang.OutOfMemoryError:

Start server side stack trace:
java.lang.OutOfMemoryError
<<no stack trace available>>
End server side stack trace
at weblogic.rmi.internal.BasicOutboundRequest.sendReceive(BasicOutboundRequest.java:109)
at weblogic.rmi.internal.BasicRemoteRef.invoke(BasicRemoteRef.java:127)
at weblogic.jms.dispatcher.DispatcherImpl_WLStub.dispatchSyncFuture(Unknown Source)
at weblogic.jms.dispatcher.DispatcherWrapperState.dispatchSync(DispatcherWrapperState.java:286)
at weblogic.jms.client.JMSSession.createProducer(JMSSession.java:1484)
at weblogic.jms.client.JMSSession.createSender(JMSSession.java:1335)
...


The GC logs also show frequent Full GCs before the server goes out of memory.


Solution Steps:


1. Enabling JMS Paging

Paging had not been enabled for the queue. Although this queue was persistent, this meant that every message was stored in the JVM memory heap in its entirety. Enabling message paging for this queue means that only the headers of paged messages are kept in memory, significantly reducing the amount of heap utilized.

As the messages were being persisted to a database via a JDBCStore, this functioned as a paging store as well. However, a FileStore must still be specified as the paging store for the JMS Server, or the JMS Server will not deploy at WLS server start time; this caters for any non-persistent destinations when paging is enabled. If non-persistent messages are not paged, the size of this FileStore will be negligible. Paged messages still occupy some space on the memory heap, as their headers are kept in memory.

On enabling paging for this specific queue, the test could cater for roughly 15,000 messages before OutOfMemory exceptions occurred. The threshold at which paging begins was deliberately set low, to 100 messages. Recovering messages from the paging store does incur a performance cost, so in Production this was set to a more reasonable number based on the peak number of messages expected on the queue under normal conditions.

Despite the gain in the number of messages that could be catered for, the heap utilisation graphs were very similar to those from before paging was enabled: no minor GCs, only full GCs at fairly frequent intervals.


2. JVM Settings and Garbage Collection Tuning

The untuned JVM heap size was 512Mb. This value could have been increased, but test results after tuning the JVM settings indicated that it was probably more than adequate.

Examining the current JVM settings uncovered some settings that needed to be changed.

The most significant issue with the JVM settings was the NewSize value. This was set very high, to 384Mb out of the total heap of 512Mb.
A reasonable New Generation area is normally 20-25% of the total heap size, and setting it larger than the Tenured Generation area is guaranteed to cause unhealthy GC behaviour. In addition, it is good practice to use NewRatio rather than NewSize, to avoid fixing an absolute size. A NewRatio of 3 (1:3, i.e. 25% of the heap) or 4 is considered the most appropriate for WebLogic Server applications. NewSize was therefore dropped and NewRatio was set to 4 (20% of the total heap).
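The NewRatio arithmetic above can be sketched in a few lines (illustrative only; the 512Mb heap is the one from this test):

```python
def generation_sizes(total_heap_mb, new_ratio):
    """-XX:NewRatio=N sizes the generations so Tenured:Young = N:1,
    i.e. the Young generation gets 1/(N+1) of the total heap."""
    young = total_heap_mb / (new_ratio + 1.0)
    return young, total_heap_mb - young

young, tenured = generation_sizes(512, 4)   # the 512Mb heap from this test
print(young, tenured)  # ~102.4Mb young (20% of heap), ~409.6Mb tenured
```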

The SurvivorRatio value was set to a reasonable value of 3.
TargetSurvivorRatio, however, was unset, meaning the default of 50 applied. 80 would probably be a better setting: the switch between survivor spaces in the JVM heap would then occur at 80% occupancy rather than 50%. The performance improvement from this should be noticeable in the frequency of minor GCs, though not dramatic.

The PermSize values were high, with PermSize and MaxPermSize both set to 384Mb. These were reduced to 64Mb and 128Mb respectively, which should be more than adequate. Though these changes are unlikely to improve performance, setting PermSize too high needlessly consumes memory.

The effect of setting a good NewRatio value was dramatic: frequent minor GCs became the rule, with infrequent full GCs. 60,000 messages were added to the queue before the heap approached full. We ran an overnight test, and somewhere between 60,000 and 70,000 messages the OutOfMemory exceptions occurred.

Scaling this up to the Production environment, which has a 1Gb heap, shows that the server could easily cater for the expected load.

Monday, 20 July 2009

Dave Matthews Band - Big Whiskey and the Groogrux King

My favourite group, Dave Matthews Band, released their 7th studio album, "Big Whiskey and the Groogrux King". I feel this is their best album since 1998's "Before These Crowded Streets", with them going back to their jamming style of music rather than the manufactured pop sound of recent albums.
The name is a tribute to founding member and saxophonist LeRoi Moore, who died in August 2008; his nickname was Groogrux. A few of the songs on the album reference LeRoi and show that the band really misses their mate. The cover portrays Moore's head on a float in the Mardi Gras procession in New Orleans.





The album begins with a short and lovely saxophone solo, "Grux", recorded by LeRoi before his death.

This is the prelude to the first song, "Shake Me Like a Monkey", which is one of the outstanding tracks. The lyrics exhort a babe so hot "that God and the devil alone could not have made you up. Two must have worked as one" to "Love Me Baby Shake Me Like a Monkey", and there's this gem: "I like my coffee with toast and jelly. But I'd rather be licking from your back to your belly". Very peppy, and a kick-ass blast this song is.

"Funny the Way It Is" is a poem showing how the same events have opposite effects on different people. "Hear the laughter while the children play war" and "Funny the way it is, If you think about it, One kid walks ten miles to school, another's dropping out".
The words remind me of Alanis Morissette's "Ironic", and the tune is simple and catchy, with subtle violin at the start and a full-blooded violin interlude in the middle over strong rock guitar riffs. I think this song also shows the band misses LeRoi, with the line "Somebody's heart is broken and it becomes your favorite song". I don't like the video too much though.


The third song, "Lying in the Hands of God", is a slow ballad with a simple arrangement. Hummable, with reflective lyrics. This song grows on you with its slow guitar rounding off, and you can picture yourself looking into a fading sunset with a drink in hand.

Then there's another decibel-upping blast, "Why I Am". It starts off with full-on drums and guitar rolls and is definitely another tribute to LeRoi: "Why I am still here dancing with the GrooGrux King, We’ll be drinking Big Whiskey while we dance and sing". You do want to tap along with the dance. This is surely an instant hit.

"Dive In" is my 2nd favourite song on the album, the one I loved as soon as I heard it. It starts on a slow note, but Dave sounds hypnotic in the lilting tune, singing an ode to all things Summer: Summer games, Summer breeze, Summer girls, Summer dresses. "Kick off your shoes, dive in the empty ocean". "Wake up sleepy head, I think the sun's a little brighter today". It surely is, guys; after listening to this, my day has brightened. Wake up and enjoy it while it lasts because it won't last forever, is the message here.


"Spaceman" is an okay song compared to the others. If it were on one of the weaker earlier albums, Busted Stuff [2002] or Stand Up [2005], it might have been rated higher, but for me it just makes up the numbers on this album. The lyrics are about a guy's love for his girl, even when things don't work out.

"Squirm" is quite unusual and surprising. It starts off slow but punches you when Dave gets a bit menacing, and the crescendo builds as he sings "Drumbeats louder (x 4)". He gets ferocious on the "Open up your head" chorus. Parts of it almost feel like a song by U2; I can imagine Bono cooing in falsetto between the climactic violin and drum rolls, and it blends in what sounds like Middle Eastern wailing for the finale. There's much more meaning to these lyrics, with their references to (lack of) Faith, the differences between Religions, and the inevitable End. But I love the song for its music rather than its philosophy.

"Alligator Pie (Cockadile)" is my favourite song on the album. Its arranged chaos is sublime: listen closely for the dog barking just before the violin at the beginning, and Dave's own howling bringing in the vocals. This song will get your feet tapping, with the banjo driving the entire melody. On the face of it, this is a song for his daughter Stella, asking "Daddy when you going to put me in a song", since many earlier songs have referenced her twin sister Grace. (Yup, even I don't get why he didn't put her in a song earlier.) But Dave also pushes in a critical reference to the Bush administration's response to the Hurricane Katrina devastation of New Orleans. The song proceeds into a lovely cacophony with Dave almost yelping through the words. My only complaint is that it ended too soon, at 3:57. The title word "cockadile" is a child's pronunciation of "crocodile", but I did not hear the word used in the song. Must be an in-family joke.

"Seven" begins with a heavy guitar sound, almost like Pearl Jam, and then moves into another jamming event. Most of the song is in falsetto, with plenty of naughty lyrics. I wonder if the title alludes to The Seven Minutes, the novel by Irving Wallace. But it's fun to hear all the same, and you'll start humming it in no time.

"Time Bomb" begins so similarly to "The Dreaming Tree" from "Before These Crowded Streets": soft alto saxophone and a ballady tune, "I'm a ticking time bomb Waiting to blow my top", and then, whoa, after 2:40 it explodes into powerful metal guitar as if Metallica guested on the track. Dave shouts out angrily, covered by a strong trumpet: "I want to pick up the pieces, Hammer in the final nail, I wanna believe in Jesus". Yes, this is a dark song about death and killing and the purpose of it all. It echoes very strongly in your heart if you have experienced loss, either through death or a failed relationship.
Some lyrics possibly refer to the death of Dave's older sister, Anne, who lived in South Africa and was murdered by her husband (who subsequently committed suicide) on or around January 27, 1994: "No one would believe it / He was such a normal guy / Shake their heads and wonder why?"
I feel the subtext here is the existence of God and whether we are controlled by a higher power: "If Martians fell from the sky what would that do to God?"

When you're recovering from that rant, the soft guitar chimes in like a refreshing glass of chilled water, and that's the opening of "My Baby Blue". This is the best ballad on the album, a love song dedicated to the one true love he seems to have lost: "Holding tight to the legs of all your angels, Goodbye my love". Given the blue eyes, I feel that though this is about loss, it's not dedicated to LeRoi; some forums suggest it is about his sister Anne's death. In a way he seems at peace here with the loss experienced in "Time Bomb", and I personally feel this is a follow-on to that song. This is an 'easy listening' track for Pop fans.

"You and Me" is another simple and easy ballad, probably a dedication to his wife: "When the kids are old enough we're gonna teach them how to fly. You and me together, We can do anything baby". It is mostly acoustic guitar and rounds off the album on a soft note.


Yes, this is a lovely collection, and it took some time coming.

Cheers to Dave, LeRoi, Stefan Lessard, Carter Beauford, Boyd Tinsley, Jeff Coffin and Tim Reynolds with a Big Whiskey in hand.

Monday, 13 July 2009

JVM Tuning from the Trenches

This article is a follow-up to http://jojovedder.blogspot.com/2009/06/slow-weblogic-part-6-jvm-heap-analysis.html. Please read that one first for the basics on JVM heap, parameters and flags.

Also remember: these tips worked for the server settings and issues described below, but blindly applying them to your server will not produce the same results. Measure first, then tune based on what you measured.

Problem:
Platform running Weblogic 8.1 on Sun V880 servers, with 32 Gb total RAM on the machine.
2 Gb assigned to the managed server JVM heap, on JDK 1.4.

Initial settings:
-XX:+AggressiveHeap -Xms2048m -Xmx2048m  -XX:SurvivorRatio=32 -XX:MaxPermSize=128m 


But there were still 20 Full GCs per hour at peak times, before the server crashed.


Analysis

1. It was decided to reduce the SurvivorRatio to 4 and restart with some more flags.

The size of ONE Survivor Space is calculated as

SurvivorSpace = NewSize / (SurvivorRatio + 2)

Keeping SurvivorRatio at 32 means the Survivor spaces are too small to hold objects surviving Eden. Hence we reduced it to 4, which allows for larger Survivor spaces.
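Plugging numbers into that formula shows why 32 was harmful (a quick sketch; the 512m NewSize is the value set later in this post):

```python
def survivor_space_mb(new_size_mb, survivor_ratio):
    # SurvivorSpace = NewSize / (SurvivorRatio + 2)
    return new_size_mb / (survivor_ratio + 2.0)

print(survivor_space_mb(512, 32))  # ~15Mb per survivor space: too small
print(survivor_space_mb(512, 4))   # ~85Mb per survivor space
```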

2. As per Sun Bug ID 6218833, setting AggressiveHeap before the heap sizes (Xmx and Xms) can confuse the JVM. Reorder the flags so that -Xms and -Xmx come before -XX:+AggressiveHeap, or do not use AggressiveHeap at all.

3. The application has 180+ EJBs with pools of beans. Hence we set -Dsun.rmi.dgc.client.gcInterval=3600000 (1 hour) instead of the default 60000 (1 min). More on this here: http://docs.sun.com/source/817-2180-10/pt_chap5.html

4. The site is restarted once a week at 4:30AM. The pattern stays normal for 2 days, and then degrades into full GCs.

5. The Old space is pretty much full; at every minor collection, the Old space must be cleared for promotion from Young to Old to take place.

6. The Permanent space is pretty much full and keeps loading more and more classes (could the difference in the number of JSPs per Release be a problem?).
Hence we increased the PermSize from 128M to 256M.

7. Ensure we are running the server JVM by using the -server flag

8. Use OptimizeIt or similar profiling tool to see the memory usage and find code bottlenecks.


The settings now were

-server -Xms2048m -Xmx2048m  -XX:MaxNewSize=512m -XX:NewSize=512m -XX:SurvivorRatio=4 -XX:MaxPermSize=256m -Xincgc -XX:+DisableExplicitGC -XX:+AggressiveHeap -XX:-OmitStackTraceInFastThrow



This reduced the Full GCs to one a day.

Error Logs

At the time of the server going out of memory prior to a crash, the logs are filled with repeated errors (up to 100 repetitions) of this sort:

java.lang.NullPointerException
 <<no stack trace available>>


Adding the -XX:-OmitStackTraceInFastThrow flag resolved this problem. The root cause of the NPE itself still has to be tracked down, but we no longer have the issue of huge runs of repeated stackless exceptions.

We could now see the stack trace as

java.lang.NullPointerException
 at java.util.StringTokenizer.<init>(StringTokenizer.java:117)
 at java.util.StringTokenizer.<init>(StringTokenizer.java:133)
 at jsp_servlet._framework._security.__login._jspService(login.jsp:294)
 at weblogic.servlet.jsp.JspBase.service(JspBase.java:27)
 at weblogic.servlet.internal.ServletStubImpl$ServletInvocationAction.run(ServletStubImpl.java:1075)


This seems to be a Sun bug described here.

Tuesday, 7 July 2009

More on Weblogic Muxers

This is a follow-up to an earlier article which detailed the Weblogic Socket Muxer pattern in thread dumps.


Another ERROR pattern associated with Muxers is shown below


####<Jun 23, 2009 9:25:24 AM BST> <Error> <Socket> <d001> <managed1> <ExecuteThread: '0' for queue: 'weblogic.socket.Muxer'> <<WLS Kernel>>
<BEA-000403> <IOException occurred on socket: Socket[addr=/10.23.22.1,port=1950,localport=61002]
java.net.SocketException: Connection refused.
java.net.SocketException: Connection refused
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
at weblogic.socket.SocketMuxer.readReadySocketOnce(SocketMuxer.java:672)
at weblogic.socket.SocketMuxer.readReadySocket(SocketMuxer.java:648)
at weblogic.socket.PosixSocketMuxer.processSockets(PosixSocketMuxer.java:123)
at weblogic.socket.SocketReaderRequest.execute(SocketReaderRequest.java:32)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:219)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:178)



As written in an earlier post, each incoming request on a TCP socket uses one File Descriptor.

We monitored the File Descriptors available to this Weblogic server using



Here the Weblogic pid = 8359

/usr/proc/bin/pfiles 8359 | grep rlimit


1024

ls /proc/8359/fd | wc -l


140



The first command shows that 1024 FDs are allocated to this process; the second shows how many are in use in real time.
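On Linux the same two numbers can be read from /proc directly; a small sketch (this assumes a Linux-style /proc, unlike the Solaris pfiles command used above):

```python
import os
import resource

def open_fd_count(pid):
    """How many file descriptors the process currently has open."""
    return len(os.listdir("/proc/%d/fd" % pid))

def fd_soft_limit():
    """The rlimit on open files for the current process (the 1024 above)."""
    return resource.getrlimit(resource.RLIMIT_NOFILE)[0]

if __name__ == "__main__":
    print(open_fd_count(os.getpid()), "of", fd_soft_limit(), "FDs in use")
```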

A plot of the second value during high-load periods shows the FDs climbing to 900+ in use; hence the resolution used in this case was to increase the number of FDs available to the process.

A recommended value is 8192.

Monday, 15 June 2009

Slow Weblogic Part 6 - JVM Heap Analysis using GCViewer

Basics


In an earlier article, I had listed the review of JVM memory parameters as one of the important checks for tuning the JEE server platform.

The basic primer for JDK 1.4 is at http://java.sun.com/docs/hotspot/gc1.4.2/


The key points you need to know are:

Total JVM Heap = Young + Tenured (also called Old)


Young = Eden + From (SS1) + To (SS2)




In the diagram below [taken from the Sun website], "From" and "To" are the names of the two Survivor Spaces (SS) within the "Young".

Perm Space (and code cache): stores the JVM’s own data, such as loaded class definitions. This is outside the heap you assign using Xms and Xmx. A good explanation of this is available here

The JVM heap defaults to an initial 2Mb and a max of 64Mb (for JDK 1.4 on Solaris).
The default Perm size is 16Mb (for JDK 1.4 on Solaris).
The defaults change with each JDK and differ on each OS, so look up the values on the respective websites.






The ratios are as shown below






Now the object life cycle and garbage collection occurs like this:

1. Objects, when created, are always first allocated in Eden.
2. When Eden fills up, a fast but not comprehensive GC (minor collection) is run over the young generation only.
3. All surviving objects are moved from Eden into one Survivor Space.
4. In subsequent minor collections, new objects move from Eden into the other Survivor Space; everything in the first Survivor Space (the survivors of the previous minor collection) is also moved into the second. Thus one Survivor Space is always empty at any given time.
5. When objects in the Survivor Space are old enough (or the Survivor Space fills up), they are moved to Tenured. By default, long-lived objects may be copied up to 31 times between the Survivor Spaces before they are finally promoted to the Old generation.
6. When Tenured fills up, a Full GC is run which is comprehensive: the entire heap is analyzed, all unreachable objects are collected, and memory is reclaimed.

Note: the above lifecycle changes slightly when advanced options such as ConcurrentMarkSweep etc are enabled.
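The aging-and-promotion mechanics of steps 3-5 can be modelled in a few lines. This is a toy sketch, not HotSpot's real algorithm; the threshold of 3 and the batch of 4 objects per cycle are arbitrary demo values:

```python
TENURING_THRESHOLD = 3  # arbitrary for the demo; HotSpot's default is up to 31

def minor_gc(eden_live, survivor_ages, tenured):
    """One minor collection: live Eden objects enter a survivor space at
    age 0, existing survivors age by one, and objects reaching the
    tenuring threshold are promoted to the Tenured generation."""
    next_survivors = []
    for age in survivor_ages + [0] * eden_live:
        age += 1
        if age >= TENURING_THRESHOLD:
            tenured.append(age)         # promoted to Old
        else:
            next_survivors.append(age)  # copied to the other survivor space
    return next_survivors, tenured      # Eden is empty after the collection

survivors, tenured = [], []
for cycle in range(5):                  # five minor collections, 4 survivors each
    survivors, tenured = minor_gc(4, survivors, tenured)
print(len(tenured))  # objects begin reaching Tenured from the 3rd cycle on
```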

Look Closer

What do these values mean?

A full list of options is available at http://java.sun.com/docs/hotspot/VMOptions.html and http://java.sun.com/javase/technologies/hotspot/vmoptions.jsp


The absolute basic ones are listed below. Note: this is for JDK 1.4;
some of these have changed in JDK 1.6.











-Xms1536m -Xmx1536m : These represent the total heap (excluding Perm space). Xms is the initial heap, set to 1.5Gb in this case; Xmx is the max heap. It is good practice to set Xms = Xmx. The max heap is limited by the RAM available on the server.

-XX:NewSize=512m : The initial size of the Young generation, set to 512Mb in this example. It is better to set this as a proportion of the heap using -XX:NewRatio.

-XX:MaxNewSize=512m : The maximum size of the Young generation, set to 512Mb in this example. Again, it is better to express this as a proportion of the heap via -XX:NewRatio.

-XX:PermSize=64m -XX:MaxPermSize=128m : The minimum and maximum sizes of the permanent generation. Optimally, set PermSize equal to MaxPermSize to avoid the space having to be resized as the permanent area grows. As noted earlier, this area of memory is over and above the total heap set using Xms.

-XX:SurvivorRatio=8 -XX:TargetSurvivorRatio=90 : The New generation area is divided into three sub-areas: Eden and two survivor spaces that are equal in size. Use -XX:SurvivorRatio=X to configure the ratio of Eden to one survivor space. Setting it to 8 means the ratio Eden:SS1:SS2 is 8:1:1, so for a NewSize of 512Mb the two survivor spaces will be 51Mb each and Eden will be 512 minus (51 + 51) = 410Mb.
TargetSurvivorRatio=90 allows the survivor spaces to fill to 90% instead of the default 50%, allowing better utilisation of the survivor space memory.

-XX:MaxTenuringThreshold=10 : Determines how many times objects are copied between the survivor spaces before being promoted to the Old generation. The default value is 31.

-XX:+DisableExplicitGC : Tells the JVM to ignore explicit System.gc() calls made by application code.

-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationConcurrentTime -XX:+PrintGCApplicationStoppedTime -Xloggc:/log/gc.log : GC logging settings specifying which details to capture and the log file in which to capture them.


If you have an appetite for more, read this http://java.sun.com/performance/reference/whitepapers/tuning.html

GCViewer

The link below explains how to download and use the GCViewer tool. It is quite useful for viewing the number of GCs and Full GCs and how the JVM is behaving over time.

http://www.javaperformancetuning.com/tools/gcviewer/index.shtml

The most important things to look at in the GCViewer analysis are:

* Acc Pauses: the accumulated pause time (total time the app was stopped for GC). Pauses are the times when an application appears unresponsive because garbage collection is occurring.
* Total Time: the total time the application runs.
* Throughput: the percentage of total time not spent in garbage collection, considered over long periods. Greater than 99% is fantastic.
* Footprint: the overall memory consumption, i.e. the working set of the process. Ideally as low as possible; on systems with limited physical memory or many processes, footprint may dictate scalability. This usually reflects the total heap allocated via Xms and Xmx.
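Throughput follows directly from the first two figures; a one-liner makes the relationship concrete (illustrative numbers, not from a real log):

```python
def throughput_pct(total_time_s, acc_pauses_s):
    """GCViewer's Throughput: share of wall-clock time not spent paused in GC."""
    return 100.0 * (total_time_s - acc_pauses_s) / total_time_s

print(throughput_pct(3600.0, 18.0))  # 18s of GC pauses in an hour -> 99.5%
```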

This diagram is taken from the above site:



Tuning Example From the Trenches - Frequent GC due to Perm Space getting Full

JVM Parameters already set

java -server -Xms1024m -Xmx1024m -XX:MaxPermSize=340m -XX:NewSize=340m -XX:MaxNewSize=340m
-XX:SurvivorRatio=9 -XX:TargetSurvivorRatio=90 -XX:+UseParNewGC
-Xloggc:/wls_domains/gclog/jms.gc -XX:+PrintGCDetails -XX:+UseParNewGC
-XX:+PrintGCTimeStamps -XX:+DisableExplicitGC -XX:+PrintTenuringDistribution
-XX:+PrintGCApplicationConcurrentTime -XX:+PrintGCApplicationStoppedTime
-XX:+JavaMonitorsInStackTrace -Dweblogic.system.BootIdentityFile=/wls_domains/xx/boot.properties
-Dweblogic.management.server=http://xx.xx.xx.xx:6000 -Dweblogic.Name=xxxxxx
-Dweblogic.ProductionModeEnabled=true
-Djava.security.policy=/opt/bea/wls/8.1sp4/weblogic81/server/lib/weblogic.policy
weblogic.Server


Full GC Pattern

The GC log shows


0.000: [Full GC 0.000: [Tenured: 0K->9794K(700416K), 0.8050952 secs] 134607K->9794K(1016960K), [Perm
: 20479K->20479K(20480K)], 0.8053527 secs]


The first Full GC happens at 0.000 sec from server start. The figures before and after the '->' represent the size of live objects before and after the GC; the number in parentheses is the total available space. So from the above numbers, Tenured was not full but Perm was almost full (20479K of 20480K).



6579.013: [Full GC 6579.013: [Tenured: 9794K->18941K(700416K), 0.9677233 secs]
155600K->18941K(1016960K), [Perm : 24575K->24575K(24576K)], 0.9679896 secs]


The same thing happens here. The second Full GC took place at 6579.013 sec (1hr 49min) after server startup.
Again a Full GC was triggered although Tenured was not full: Tenured is now 18.9Mb out of 700Mb, but the Perm space has grown to 24.5Mb and is not being cleared.


9363.515: [Full GC 9363.516: [Tenured: 18941K->19463K(700416K), 0.6532332 secs] 36950K->19463K(1016960K), [Perm : 28672K->26462K(28672K)], 0.6536095 secs]



At the 3rd Full GC, 9363 seconds after server startup, the Perm space grew to 28.6Mb and recovered only marginally, to 26.4Mb.

Observing this over a long period of time, we concluded that around 20Mb is allocated to Perm at startup; with each Full GC it keeps growing towards 30Mb, later shrinks back to around 25Mb, and the cycle continues.

The pattern is highlighted below:


0.000: [Full GC 0.000: [Tenured: 0K->9794K(700416K), 0.8050952 secs] 134607K->9794K(1016960K), [Perm : 20479K->20479K(20480K)], 0.8053527 secs]
6579.013: [Full GC 6579.013: [Tenured: 9794K->18941K(700416K), 0.9677233 secs] 155600K->18941K(1016960K), [Perm : 24575K->24575K(24576K)], 0.9679896 secs]
9363.515: [Full GC 9363.516: [Tenured: 18941K->19463K(700416K), 0.6532332 secs] 36950K->19463K(1016960K), [Perm : 28672K->26462K(28672K)], 0.6536095 secs]
13483.233: [Full GC 13483.233: [Tenured: 19463K->16962K(700416K), 0.9783693 secs] 26678K->16962K(1016960K), [Perm : 30719K->21330K(30720K)], 0.9857390 secs]
17308.829: [Full GC 17308.830: [Tenured: 16962K->17312K(700416K), 1.0578872 secs] 88025K->17312K(1016960K), [Perm : 25600K->25600K(25600K)], 1.0581738 secs]
21237.810: [Full GC 21237.810: [Tenured: 17312K->17814K(700416K), 1.4728764 secs] 302290K->17814K(1016960K), [Perm : 29695K->26719K(29696K)], 1.4801234 secs]
30079.672: [Full GC 30079.672: [Tenured: 17814K->18676K(700416K), 1.0282446 secs] 83159K->18676K(1016960K), [Perm : 30975K->27564K(30976K)], 1.0349869 secs]
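Spotting this pattern can be automated by pulling the Perm figures out of each Full GC line. A sketch matched against the log format above (the 95% threshold is an arbitrary choice of mine):

```python
import re

# The "[Perm : before->after(capacity)]" clause of a -XX:+PrintGCDetails line.
PERM_RE = re.compile(r"\[Perm\s*:\s*(\d+)K->(\d+)K\((\d+)K\)\]")

def perm_nearly_full(gc_line, threshold=0.95):
    """True when Perm occupancy before the Full GC is at or above threshold."""
    m = PERM_RE.search(gc_line)
    if not m:
        return False
    before, _after, capacity = (int(g) for g in m.groups())
    return before >= threshold * capacity

line = ("0.000: [Full GC 0.000: [Tenured: 0K->9794K(700416K), 0.8050952 secs] "
        "134607K->9794K(1016960K), [Perm : 20479K->20479K(20480K)], 0.8053527 secs]")
print(perm_nearly_full(line))  # True: Perm is at 20479K of 20480K
```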




Solution:
Though MaxPermSize=340m is specified, the initially allocated Perm space keeps filling up, and the JVM invokes a Full GC to free memory. The default initial PermSize is 16Mb, so the Perm space keeps resizing itself as the application loads classes.

Not sure if this behaviour is a bug, but adding an initial PermSize of 64Mb using the flag -XX:PermSize=64m resolved this issue of frequent Full GCs.


UPDATE: Another example is published at http://jojovedder.blogspot.com/2009/07/jvm-tuning-from-trenches.html

Sunday, 17 May 2009

Weblogic - Socket Muxers in Thread Dumps

What are these weblogic.socket.Muxer threads seen in thread dumps?

Note: for a basic primer on taking and analyzing thread dumps, see this earlier article

Socket Reader Threads accept the incoming request from the Listen Thread Queue and put it on the Execute Thread Queue.

In WL 8.1, there are 3 socket reader threads by default.
In WL 9 and 10, WebLogic allocates 33% of server threads to act as socket readers by default. This does not usually need to be changed.

One socket reader thread is usually in the poll function, while the others are available to process requests.
The polling thread is highlighted in the thread dump below.


"ExecuteThread: '2' for queue: 'weblogic.socket.Muxer'" daemon prio=5 tid=0x016b2148 nid=0x42 waiting for monitor entry [5997f000..5997fc28]
at weblogic.socket.PosixSocketMuxer.processSockets(PosixSocketMuxer.java:91)
- waiting to lock <0x94846b40> (a java.lang.String)
at weblogic.socket.SocketReaderRequest.execute(SocketReaderRequest.java:32)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:219)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:178)


"ExecuteThread: '1' for queue: 'weblogic.socket.Muxer'" daemon prio=5 tid=0x00683c28 nid=0x41 waiting for monitor entry [59a7f000..59a7fc28]
at weblogic.socket.PosixSocketMuxer.processSockets(PosixSocketMuxer.java:91)
- waiting to lock <0x94846b40> (a java.lang.String)
at weblogic.socket.SocketReaderRequest.execute(SocketReaderRequest.java:32)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:219)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:178)


"ExecuteThread: '0' for queue: 'weblogic.socket.Muxer'" daemon prio=5 tid=0x0079e5b0 nid=0x40 runnable [59b7f000..59b7fc28]
at weblogic.socket.PosixSocketMuxer.poll(Native Method)
at weblogic.socket.PosixSocketMuxer.processSockets(PosixSocketMuxer.java:100)
- locked <0x94846b40> (a java.lang.String)
at weblogic.socket.SocketReaderRequest.execute(SocketReaderRequest.java:32)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:219)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:178)



In an earlier support case on Stuck Threads, we asked BEA:

Should we worry about the weblogic.socket.Muxer threads, which always show 2 threads waiting for a lock and the 3rd thread locking the same object?

The Muxer TD is attached. It shows the same behaviour on all our Weblogic servers.


Full thread dump Java HotSpot(TM) Server VM (1.4.2_05-b04 mixed mode):

"ExecuteThread: '2' for queue: 'weblogic.socket.Muxer'" daemon prio=5 tid=0x0151
c588 nid=0x1b4 waiting for monitor entry [ad57f000..ad57fc28]
at weblogic.socket.PosixSocketMuxer.processSockets(PosixSocketMuxer.java:91)
- waiting to lock <0xd9331760> (a java.lang.String)
at weblogic.socket.SocketReaderRequest.execute(SocketReaderRequest.java:32)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:219)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:178)



"ExecuteThread: '1' for queue: 'weblogic.socket.Muxer'" daemon prio=5 tid=0x0161
d608 nid=0x1b3 runnable [ad67f000..ad67fc28]
at weblogic.socket.PosixSocketMuxer.poll(Native Method)
at weblogic.socket.PosixSocketMuxer.processSockets(PosixSocketMuxer.java:100)
- locked <0xd9331760> (a java.lang.String)
at weblogic.socket.SocketReaderRequest.execute(SocketReaderRequest.java:32)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:219)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:178)



"ExecuteThread: '0' for queue: 'weblogic.socket.Muxer'" daemon prio=5 tid=0x01bb
6730 nid=0x1b2 waiting for monitor entry [ad77f000..ad77fc28]
at weblogic.socket.PosixSocketMuxer.processSockets(PosixSocketMuxer.java:91)
- waiting to lock <0xd9331760> (a java.lang.String)
at weblogic.socket.SocketReaderRequest.execute(SocketReaderRequest.java:32)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:219)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:178)



The reply from BEA Support was that the above pattern of weblogic.socket.Muxer threads is not a cause of stuck threads.


Why do they mostly show as being Stuck in the Samurai TD analyzer?




As the image shows, when you analyze thread dumps using Samurai, the muxer threads are shown as being Stuck since they're all locked on the same object. This is probably treated as a deadlock condition.


"ExecuteThread: '2' for queue: 'weblogic.socket.Muxer'"
- waiting to lock <0xd9b61098> (a java.lang.String)


"ExecuteThread: '1' for queue: 'weblogic.socket.Muxer'"
- waiting to lock <0xd9b61098> (a java.lang.String)


"ExecuteThread: '0' for queue: 'weblogic.socket.Muxer'"
- locked <0xd9b61098> (a java.lang.String)



But you will see the same pattern in any thread dump, even on a development instance serving no requests.
The locks mentioned do show up as red in Samurai, but they aren't deadlocks, just regular locks.

A thread gains an exclusive lock on an object to perform some action, then frees it, allowing the next thread to gain access.

Additionally, if you look at the thread dumps over time, you'll see that these specific locks are not always present - they are moving between the threads which is indicative of their transitory nature.



I want to know more details on Muxers


The socket Muxer manages the server’s existing socket connections.
It first determines which sockets have incoming requests waiting to be processed. It then reads enough data to determine the protocol and dispatches the socket to an appropriate runtime layer based on the protocol.
In the runtime layer, the socket muxer threads determine which execute thread queue to use and delegate the request accordingly.

From the documentation on http://edocs.bea.com/wls/docs100/perform/WLSTuning.html#wp1152246 ,
Weblogic has two versions of the socket muxer, one is the Java version and the other uses a native library which makes better use of operating system calls. The Enable Native IO checkbox on the server’s configuration settings tells the server which version to use. This is ON by default for most platforms.

Native muxers provide superior scalability because they implement a non-blocking thread model. When a native muxer is used, the server creates a fixed number of threads dedicated to reading incoming requests. Oracle recommends using the default setting of true for the Enable Native IO parameter which allows the server to automatically select the appropriate muxer to use.

To use Native I/O, the native library must be present in the server's shared library path; the default startup scripts set this up.
When the server cannot find the native library, it throws the error
java.lang.UnsatisfiedLinkError: no muxer in java.library.path
and falls back to the Java version of the muxer.

Confirm that the LD library path is correct and pointing to the Solaris LD path. Check the startup log when starting a managed server: what is the value of java.library.path?
This is where the JVM actually gets the library from.
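As a sketch, the java.library.path value can be pulled straight out of a saved startup log (the log path and its contents below are made-up examples, not a real WebLogic log):

```shell
# Create a hypothetical startup-log excerpt (path and contents are examples)
cat > /tmp/managed1.log <<'EOF'
<Info> <WebLogicServer> java.library.path = /opt/bea/weblogic81/server/lib/solaris
EOF

# Pull out the value the JVM is actually using
grep -o 'java.library.path = .*' /tmp/managed1.log
```

If the printed directory does not contain the native muxer library, the server will fall back to the Java muxer.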

http://m-button.blogspot.com/2008/08/how-does-weblogic-handle-socket-muxers.html has a good example of how to identify Native vs Java muxer in a thread dump.

The Thread Dump I’ve used in my examples above uses the Native muxer (weblogic.socket.PosixSocketMuxer) on Solaris.
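A quick way to list which muxer classes a saved thread dump mentions is a one-line grep. This is a sketch: /tmp/td.txt and its contents are a made-up excerpt, not a real dump.

```shell
# Save a thread dump excerpt to a file (hypothetical example data)
cat > /tmp/td.txt <<'EOF'
"ExecuteThread: '1' for queue: 'weblogic.socket.Muxer'" daemon prio=5 runnable
at weblogic.socket.PosixSocketMuxer.poll(Native Method)
at weblogic.socket.PosixSocketMuxer.processSockets(PosixSocketMuxer.java:100)
EOF

# List the distinct muxer implementations seen in the dump
grep -o '[A-Za-z]*SocketMuxer' /tmp/td.txt | sort -u   # → PosixSocketMuxer
```

Seeing PosixSocketMuxer or DevPollSocketMuxer confirms a native muxer; SocketMuxer/JavaSocketMuxer entries indicate the Java version.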

Solaris has another Native muxer called the weblogic.socket.DevPollSocketMuxer
An example TD using this muxer is shown below.


"ExecuteThread: '4' for queue: 'weblogic.socket.Muxer'" waiting for lock java.lang.String@4edf4f BLOCKED
weblogic.socket.DevPollSocketMuxer.processSockets(DevPollSocketMuxer.java:95)
weblogic.socket.SocketReaderRequest.run(SocketReaderRequest.java:29)
weblogic.socket.SocketReaderRequest.execute(SocketReaderRequest.java:42)
weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:145)
weblogic.kernel.ExecuteThread.run(ExecuteThread.java:117)



"ExecuteThread: '3' for queue: 'weblogic.socket.Muxer'" RUNNABLE native
weblogic.socket.DevPollSocketMuxer.doPoll(Native Method)
weblogic.socket.DevPollSocketMuxer.processSockets(DevPollSocketMuxer.java:96)
weblogic.socket.SocketReaderRequest.run(SocketReaderRequest.java:29)
weblogic.socket.SocketReaderRequest.execute(SocketReaderRequest.java:42)
weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:145)



weblogic.kernel.ExecuteThread.run(ExecuteThread.java:117)
"ExecuteThread: '2' for queue: 'weblogic.socket.Muxer'" waiting for lock java.lang.String@4edf4f BLOCKED
weblogic.socket.DevPollSocketMuxer.processSockets(DevPollSocketMuxer.java:95)
weblogic.socket.SocketReaderRequest.run(SocketReaderRequest.java:29)
weblogic.socket.SocketReaderRequest.execute(SocketReaderRequest.java:42)
weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:145)
weblogic.kernel.ExecuteThread.run(ExecuteThread.java:117)



"ExecuteThread: '1' for queue: 'weblogic.socket.Muxer'" waiting for lock java.lang.String@4edf4f BLOCKED
weblogic.socket.DevPollSocketMuxer.processSockets(DevPollSocketMuxer.java:95)
weblogic.socket.SocketReaderRequest.run(SocketReaderRequest.java:29)
weblogic.socket.SocketReaderRequest.execute(SocketReaderRequest.java:42)
weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:145)
weblogic.kernel.ExecuteThread.run(ExecuteThread.java:117)



"ExecuteThread: '0' for queue: 'weblogic.socket.Muxer'" waiting for lock java.lang.String@4edf4f BLOCKED
weblogic.socket.DevPollSocketMuxer.processSockets(DevPollSocketMuxer.java:95)
weblogic.socket.SocketReaderRequest.run(SocketReaderRequest.java:29)
weblogic.socket.SocketReaderRequest.execute(SocketReaderRequest.java:42)
weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:145)
weblogic.kernel.ExecuteThread.run(ExecuteThread.java:117)





To change the number of Muxers from the default, follow the instructions given at http://e-docs.bea.com/wls/docs92/ConsoleHelp/taskhelp/tuning/TuningSocketReaders.html




See http://jojovedder.blogspot.com/2009/07/more-on-weblogic-muxers.html for an update on Muxers

Additionally, on Oracle JRockit JVMs, the thread dumps contain information which points out the same situation in a different manner.

After the normal stack dumps, BEA JRockit performs deadlock detection. This is done by finding "lock chains" in the Java application. If a lock chain is found to be circular, the application is considered caught in a deadlock.

A detailed explanation of the 3 types of lock chains in JRockit is given here


What is relevant for us is the example of Muxers which are shown as:


Blocked lock chains
===================
Chain 2:
"ExecuteThread: '2' for queue: 'weblogic.socket.Muxer'" id=129 idx=0x218 tid=4079 waiting for java/lang/String@0x37804000 held by:
"ExecuteThread: '0' for queue: 'weblogic.socket.Muxer'" id=127 idx=0x210 tid=4077 in chain 1

Open lock chains
================
Chain 1:
"ExecuteThread: '1' for queue: 'weblogic.socket.Muxer'" id=128 idx=0x214 tid=4078 waiting for java/lang/String@0x37804000 held by:
"ExecuteThread: '0' for queue: 'weblogic.socket.Muxer'" id=127 idx=0x210 tid=4077 (active)


As per the explanation, the Open lock chain depicts Thread 1 waiting for Thread 0. This is not a deadlock, only a straight dependency.

Since Thread 0 is already part of the Open lock chain, the fact that Thread 2 is also waiting on the same Thread 0 is treated as a "Blocked lock chain".
In this case this is not a problem.

Update 15th Feb 2011

I'm glad this blog entry on weblogic muxers has made it onto the Oracle forums with a mention from James Bayer.

Tuesday, 12 May 2009

Slow Server Response Part 4 - Platform Checks and Action Plan



In the earlier parts of this article I described some technical tuning tips specific to a slow-performing Weblogic JEE server.

Another earlier post has looked at analyzing thread dumps and prstats when the Weblogic/JEE server consumes high CPU.




This article provides a sequence of actions a Support team should carry out when faced with a critical situation - the site is down / servers running high CPU - and management teams want quick updates - and an action plan.

So here is (from experience) the Rapid Action Plan:


Technical Checklist for the Platform

1. Start a diary and mark out each of the steps given below as well as any following action with the timestamp at which it was carried out and the result/status.

2. Get the relevant technical experts on a conference call + Netmeeting/LiveMeeting/Desktop Sharing/Remote Admin.

3. Does the application have any traps or thresholds set which are configured to automatically raise alarms to the Support teams? Have any of the traps been exceeded? e.g. server CPU, memory utilization, number of threads.

4. Can we narrow down to a problem area - web server, application server, database, OS - based on log files, error messages and Support team or user input?

If a particular Weblogic Managed Server is identified as a point of failure, does the configuration allow that server process to be shut down for a while - reducing cluster capacity but still providing an acceptable quality of service?

Are all the Managed Servers running with equal threads? If not, this can cause a load balancing issue.

Sometimes the bottleneck can be the Web server plugin which is not able to properly load balance the requests across the Weblogic cluster. This is usually the case when users complain of loss of session, spontaneous logout etc. The problem can be the user has been bounced from one Weblogic server to another in the cluster - and the session might not be replicated across the servers.

Are there any redundant JDBC connection pools - i.e. those configured with a high capacity when monitoring shows they don't need that many?
If so, reduce the capacity of those pools so that they do not hold on to unnecessary connections on the database.

5. From the log files, identify whether a particular application or code area is causing an issue. eg: EJB throwing errors, Spring bean configuration missing.

6. Are the log files too large (> 500 MB), or not getting rotated via the Weblogic rotation policy or Unix archiving?

7. Check the downstream back-end systems which the server connects to via DB link, web service, XML/HTTP, JMS, HTTP screen scraping etc. Any known issues or planned outages? There should be error logs pointing in that direction. Contact their support teams to confirm their system is available and returning responses as per the SLA.

8. Can the problem be replicated on Reference/Test instances?
A Dev or Test team can try in parallel to see whether the issue is reproducible.

If Yes, is it code related or configuration related?

If the issue is not reproducible, can it be data related? Perhaps a particular set of data exists on Production which is not on the Test instance - and that could be the problem. Can the data be brought into Test to try and replicate the problem?

9. Can it be content related? Does the platform have a Content Management System? Is the link from the CMS to the server working or broken? Is the Content correctly getting deployed into the database + file system as per daily process?

Check whether a content deployment was carried out, and whether there are records of it failing or passing. Is content deployment happening during business hours, utilizing system and CPU resources and choking the JEE server?

Can a resource-hungry content deployment process be moved to out-of-business hours?

10. Test broken user journeys on the site.

Can the problem be seen while running HttpHeaders, HttpAnalyzer, Fiddler etc.? Do they show any change in HTTP parameters such as cookies or session timeouts?
Compare these against the Test environment and see whether there are any mismatches which could cause the problem.
If user sessions are bouncing between managed servers, this will be visible in the weblogic JSESSIONID, which will change on the client browser.

11. What were the last few changes to the platform?

Check latest release or configuration change as per Support Team Diary of Events. Could these have caused an issue and should these be rolled back?

Were these properly tested and signed off before going into Production?

eg: any new Database driver, changes to TCP parameters, JTA timeouts increased?

12. Check the last few support cases raised. See if there were any problems reported by business or end customers.

13. Solaris/OS checks

Is the platform running on the latest OS patch levels and JDK settings as recommended by Sun?

a. No of processes running. Use
ps -ef | wc -l

b. Ping the boxes, to check if they are alive

c. CPU utilization

prstat

d. Memory utilization

vmstat 3

Swap space utilization, amount of space in /tmp - is there any old file or core dump occupying the directory used as swap space. We once moved old EAR files from /tmp on the server; memory utilisation went from approx 90% down to 65%.

e. Disk space

df -k

f. No of File descriptors


14. Weblogic/Web server checks

a. Thread utilization - any Stuck Threads

Analyze thread dumps: take at least 4 sets of thread dumps 5 seconds apart when a stuck thread is observed. See here for more details on what to look for in the thread dumps. Use Samurai or TDA.

b. CPU %

c. Access and error logs - Any CRITICAL messages in the logs. Any Connection_Refused errors indicating the threads were not able to accept new requests.

d. No of open sockets to weblogic

netstat -a | grep <port>

e. Memory utilization via Weblogic console

f. Check via console if all the managed servers are up and running

g. Connection pool utilization - are the pools hitting their peak values?

h. Is frequent garbage collection shown in the console?
Check the frequency of GC and the GC pattern. Has the JVM been tuned to allow optimum garbage collection? See this URL for more.

i. Check the values in weblogic.xml for jsp pageCheckSeconds and servlet-reload-check-secs - if these are at the default of 1, the server will check every second whether each JSP or servlet should be recompiled - this is horribly slow.
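As a sketch, disabling the per-request checks in weblogic.xml looks like this (element names as in the WLS 9/10 schema; -1 means never re-check - verify the exact elements for your release):

```xml
<jsp-descriptor>
    <!-- never re-check JSPs for recompilation on Production -->
    <page-check-seconds>-1</page-check-seconds>
</jsp-descriptor>
<container-descriptor>
    <!-- never re-check servlet classes for reload on Production -->
    <servlet-reload-check-secs>-1</servlet-reload-check-secs>
</container-descriptor>
```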

j. Cron job logs - any failures.

k. No of weblogic sessions per server - the more the number of HttpSessions, the higher the memory (RAM) that gets used.

l. Is a large part of the application journey over SSL. When supporting the cryptography operations in the SSL protocol, WebLogic Server cannot handle as many simultaneous connections.
Typically, for every SSL connection that the server can handle, it can handle three non-SSL connections. SSL reduces the capacity of the server by about 33-50% depending upon the strength of encryption used in the SSL connections.
(Source: http://edocs.bea.com/wlp/docs92/capacityplanning/capacityplanning.html#wp1080286)
Consider reducing the SSL journeys on the site.

m. Disk space taken by Weblogic and other logs such as Log4J.
Is log4j running in DEBUG and writing out loads of logs? This will also slow down the server horribly.

15. Database checks

a. SQL Server locks (Call out DBA)

b. Database stuck/locked processes

c. Any DB link down

d. Any issues with open cursors or cached cursors?

e. Is the database running at very high Memory Utilization?


16. Search Engine processing - check the log for the day.


17. Any MIS such as Webtrends / Omniture Analysis - for application usage. Has there been a sudden rise in users on the site - eg a marketing campaign or a new feature gone live - causing a rise in usage which the infrastructure cannot cope with.

18. Any application cached data which was wiped out and took time to rebuild - causing slow service in the interim period. eg: is any database table with a lot of rows being cached.
Or conversely, is there incorrect data in a certain cache and will clearing the cache help ?

19. Any SMTP email delivery failures due to problems on the OS?

20. Any planned backup processes running on the OS which takes up a lot of CPU.



Remedial actions

1. Make a list of possible changes based on the above checks to address these problems.

2. Only change one setting on any system at a time. Test and record the desired effects and observed effects. Be clear on why a particular change is being made.

3. If it doesn't work rework the plan to get to root cause of failure.

4. Be aware that reactive changes will be made directly to the production
environment by various parties. Significant changes will be made purely to enable investigation and diagnosis of issues.
The lack of up-to-date documentation creates risk. Maintain a documented rationale for each design decision, configuration choice, or system parameter; this reduces the likelihood that mistakes will be repeated. Documentation is a key communication tool; without it, intent may be miscommunicated within the team. If key staff members leave, knowledge will be lost, to the extent that the platform may become unmanageable.

5. Add additional tests to the regression test
suite. Increase the coverage of the regression test suite, focussing on
simulating live system interaction.


6. Over the long term, identify and re-architect towards removing Single Points of Failure - such that loss of a single machine or process would not lead to a loss of service.

Examples:
· Single web server machine, hosting the Apache/SunOne instances.
· Single application server machine, hosting the Weblogic application server.
· Single database server instance.

The system runs at risk of lengthy service outage if any one of these components fails. If a hardware failure occurred and one of the servers was lost, alternative hardware would need to be installed and initialised from back-up tapes.
This needs to be fixed and stabilized over the long term.

7. Medium to long term remedial actions include code review. Use your tools of choice for Java, .NET, front-end Angular/React or back-end NodeJS - such as SonarQube, lint4j, SonarScanner for .NET, JSLint, ESLint etc.

8. While the analysis is going on, a member of the Support team should circulate the KEY metrics on an hourly basis to the TECHNICAL community. Ensure this is the important dataset and not too much information, which just becomes noise.


Example in the table below:












Metric                                 Server 1  Server 2  Server 3
Idle Threads                                 11        14        17
JMSErrors                                    10         1         7
IOExceptions                                  3         0         1
Stuck Threads                                 0         0         7
netstat -a | grep TIME_WAIT | wc -l         186       289        69
CPU Utilization (%)                          12       1.8       3.6
Memory Utilization (%)                      3.9       2.9       2.1
500 Internal Server Error                     3         0         0



No of logged in Users: 260

JMS Pending Messages: 0



Any queries or clarifications, leave me a comment and I'll try to get back.

Thursday, 7 May 2009

Slow Weblogic Response Part 3 - Tuning File Descriptors

This article is a follow-up to the Overall Tuning considerations discussed earlier.
A very important Operating System value to be tuned is the number of File Descriptors.

File Descriptors and relation to Sockets

A File Descriptor (FD) is a handle created by a process when a file is opened. Each process can use a set limit of FDs and this is usually an OS level setting.
In Solaris 8, the default is 1024. In later Solaris releases the default hard limit is 65,536, but this still needs to be set in /etc/system as given below.
So, the default available for a process on an untuned OS is 1024.


# ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 8192
coredump(blocks) unlimited
nofiles(descriptors) 2048
memory(kbytes) unlimited





In a JEE server, each incoming request uses a TCP socket and this socket consumes a file descriptor from the total available for the process.

As the number of requests coming into the server increases, you can face a situation where many sockets are open and you run out of FDs.
This could happen if you have a large number of clients (~1000 or more). It could also happen if HTTP keep-alive is turned off, leaving a lot of sockets in TIME_WAIT.

If you have Stuck Threads, they keep their FDs open, and those won't be closed until the thread is released.

File Descriptors limit exceeded

A Weblogic server throws the below error when the FD limit has been exhausted.



BEA-000204 java.net.SocketException: Too many open files

OR

java.io.IOException: Too many open files
at java.lang.UNIXProcess.forkAndExec(Native Method)



When a Weblogic server starts up, there is an INFO message in the logs stating how many FDs have been allocated to the process.


<<WLS Kernel>> <> <BEA-000415> <System has file descriptor limits of - soft: 2,048, hard: 2,048>

<main> <<WLS Kernel>> <> <BEA-000416> <Using effective file descriptor limit of:

2,048 open sockets/files.>



To display a process' current file descriptor limit on Solaris, the command is:



/usr/proc/bin/pfiles <pid> | grep rlimit

Example:

$ /usr/proc/bin/pfiles 9052 | grep rlimit

Current rlimit: 8192 file descriptors



You can check the actual number of FDs being used by a running server at any time using this command:




Note: This is not the max limit but the actual number used by the process at that point of time

ls /proc/<pid>/fd | wc -l




If you monitor this on a regular basis, you can see how often you are nearing the max limit set, and whether the values need to be tuned to support the peak loads and peak traffic timings for your server.

As an example, we plotted FDs used every 5 minutes as below












Time   FD Used
11:00      389
11:05      429
11:10      748
11:15      488
11:20      337
11:25      595
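A 5-minute sampling like the one above can be collected with a small script run from cron. This is a sketch: the function name and output format are illustrative, and the PID argument would be your managed server's process id.

```shell
#!/bin/sh
# Print one timestamped FD-usage sample for a given process.
# Intended to be invoked from cron every 5 minutes and appended to a log.
sample_fds() {
    pid=$1
    count=$(ls "/proc/$pid/fd" 2>/dev/null | wc -l)
    echo "$(date '+%H:%M') $count"
}

# Example: sample the current shell's own FD usage
sample_fds $$
```

Plotting the resulting log shows how close you get to the configured limit at peak traffic.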



Increasing File Descriptors

The Hard limit is the max value set on the OS.

The Soft limit shows the value set for a particular child process on the OS. This cannot be higher than the hard limit.
You can change the soft limit on-the-fly by using


ulimit -Sn 8192


This applies only to that shell session and is not a permanent change. Add the command to the user's .profile to avoid repeating it each time.

If the hard limit needs to be raised, the root user must update the /etc/system file and the machine must be rebooted. Even though the rlim_fd_max default on Solaris 9 is 65536, it must still be set in the /etc/system file.

The values to be changed are

rlim_fd_max (default hard limit)
rlim_fd_cur (default soft limit)
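A sketch of the corresponding /etc/system entries (the values are examples; a reboot is required for them to take effect):

```
* /etc/system fragment - file descriptor limits (example values)
set rlim_fd_max=65536
set rlim_fd_cur=8192
```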

On Linux:

To increase the hard limit to (say) 65535, use the following command (as root):

echo "65535" > /proc/sys/fs/file-max

To make this value survive a system reboot, add it to /etc/sysctl.conf and specify the maximum number of open files permitted:

fs.file-max = 65535


Any queries or clarifications, leave me a comment and I'll try to get back.

Wednesday, 6 May 2009

Slow Weblogic Response Part 2 - Overall Tuning Considerations




Part 1 of this article was related to using the JSP precompilation and tuning the recompilation settings provided in Weblogic.

Now, let's take a look at the other parameters that must be tuned and set correctly for the Weblogic server to deliver acceptable performance on your site.

Usually the problem statement is similar to this:

1. The site regularly stops responding and all available execute threads are consumed. All future requests fail to be handled and a server restart is required.
2. The page response times of the entire site are too high and need to be brought down to a more useable level.

Note: An additional post has been published which provides a skeleton Action Plan for analyzing the entire slow site/high CPU issue including managing stakeholder expectations and appropriate reporting.

Operating System Review
A review of the Operating System configuration needs to look at
a) The number of file descriptors available and whether that matches the value recommended for Weblogic
b) Various TCP settings (also called NDD parameters) which will affect how long it takes to recycle a closed or dropped connection.
2 top settings are:
• File Descriptor limit: increase to 8192 if not 65536. A detailed follow-up on File Descriptors is published here.
• tcp_time_wait_interval: change to 60000. [The Solaris 8 default of 4 minutes keeps a socket in TIME_WAIT state even after the response is provided to the client; set this down to 1 minute. The default in Solaris 9 is already 1 minute.]
Check the Oracle site for the latest recommended values
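On Solaris, the TIME_WAIT interval above is inspected and changed with ndd as root. A sketch (the value is in milliseconds; the change takes effect immediately but does not survive a reboot, so also add it to a boot-time script):

```shell
# Show the current value, then set it to 1 minute (60000 ms)
ndd /dev/tcp tcp_time_wait_interval
ndd -set /dev/tcp tcp_time_wait_interval 60000
```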

Database Usage and JDBC Drivers
a) If you get Out Of Memory errors occurring in JDBC calls, it is recommended that the JDBC driver be upgraded to the latest version.

b) Prepared Statement caching is a feature that allows prepared statements to be held in a cache on the database so they do not have to be re-parsed for each call. This functionality needs to be enabled per Connection Pool and can have a significant impact on the performance of the pools. But this needs to be validated with a focussed round of Performance testing.

It should be noted that for every Prepared Statement that is held in cache, a cursor is held open on the database for each connection in the pool. So if a cache size of 10 is used on the abcPool, and the pool has a size of 50 then 500 open cursors will be required. Repeatable load tests will highlight any gains achieved by enabling this caching.
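Worked through as a config sketch: a statement cache of 10 on a pool of 50 connections can hold 10 × 50 = 500 cursors open. In WLS 8.1-style config.xml this looks roughly like the fragment below (attribute names from that release, pool name and URL are made-up examples; verify against your version):

```xml
<JDBCConnectionPool Name="abcPool"
    DriverName="oracle.jdbc.OracleDriver"
    URL="jdbc:oracle:thin:@dbhost:1521:SID"
    InitialCapacity="50"
    MaxCapacity="50"
    PreparedStatementCacheSize="10"/>
```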

Review JDBC Pool Sizes
Review connection pool versus the number of Execute threads. Usually keep Pool size close to Execute thread size. Note: This applies to versions earlier than Weblogic 9. See detailed explanation below.

If the JDBC pool size is much smaller than the thread count, performance can degrade quite dramatically, since threads have to wait for connections to be returned to the pool.
Your most frequently used pool should have their minimum (initial) and maximum sizes increased to the number of Execute threads plus one. This will mean there is an available connection for every thread.
One comment on pool sizing: wherever possible, it is beneficial to set the initial and maximum connections to the same size for a JDBC pool, as this avoids costly expanding/shrinking work - both the establishment of new connections while expanding the pool and the housekeeping work for the pool.

However it is also recommended to monitor the JDBC pools during peak hours and see how many connections are being used at maximum. If you are not hitting the MaxCapacity, it is useful to reduce the MaxCapacity to avoid unnecessary open cursors on the database.

Note: As of Weblogic 9 and higher, Execute Queues are now replaced by Work Managers. Work Managers can be used for JDBC pools by defining the max-threads-constraint to define how many threads to allocate for a particular Datasource.
It is possible to run Weblogic 9 and 10 with the Execute Queues as available earlier. This is not recommended since Work Managers are self-tuning and more advanced than Execute Threads.
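A sketch of a Work Manager with a max-threads-constraint, as it might appear in weblogic-application.xml (the names are illustrative; see the WLS 9+ documentation for exact placement and scoping):

```xml
<work-manager>
    <name>OrdersWorkManager</name>
    <!-- cap the threads this application/datasource work can consume -->
    <max-threads-constraint>
        <name>OrdersMaxThreads</name>
        <count>15</count>
    </max-threads-constraint>
</work-manager>
```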

WebLogic Server, Version 8.1, implemented Execute Queues to handle thread management, in which you created thread pools to determine how workload was handled. WebLogic Server still provides Execute Queues for backward compatibility, primarily to facilitate application migration. However, when developing new applications, you should use Work Managers to perform thread management more efficiently.
You can enable Execute Queues in the following ways:
• Using the command line option -Dweblogic.Use81StyleExecuteQueues=true
• Setting the Use81StyleExecuteQueues property via the Kernel MBean in config.xml.
Enabling Execute Queues disables all Work Manager configuration and thread self-tuning.
Execute Queues behave exactly as they did in WebLogic Server 8.1.


Database Persistent JMS Queues

Verify whether the database architecture is such that persistent JMS queues use the same database instance as a message store that your Weblogic portal uses for its data.
As the volumes on these queues increase this could significantly degrade the performance of the portal by competing for valuable CPU cycles on the database server.

1. Move the message store for persistent queues to a separate database instance from that used by most of the JDBC pools belonging to the Weblogic server. This will prevent increases in message volumes from adversely affecting the performance of the database, which would also slow the portal applications down and vice-versa.

2. Implement paging with a file store. This allows the amount of memory consumed by JMS queues to be restricted, by paging message contents to disk and only holding headers in memory. Note that this does not provide failover protection the way persistence does, and paging performs better with a file store than with a JDBC store.

3. It is recommended that a review is also undertaken to determine exactly which queues are persisted and whether they truly need to be. The performance gains from switching to non-persisted queues are substantial, and guaranteed delivery is not always required.

Review Number of Execute Threads
A common mistake made by Support teams when seeing Stuck threads is to increase the number of execute threads in a single ‘default’ queue. I once worked on a project which ran the Weblogic server with 95 threads.

This figure is very high and results in a large amount of context-switching as load increases, which consumes valuable CPU cycles. Because threads consume memory, you can degrade performance by increasing the value of the Thread Count attribute unnecessarily.

Taking a Thread Dump when the server is not responding will show what the threads are doing, and help identify whether there is an application coding issue or a deadlock occurring. Use Samurai to analyze these, as posted earlier.
It is recommended that regular monitoring of the number of idle threads and the length of queued requests for each execute queue is set up via MBeans. This allows the Support teams to plot a graph of utilization and validation of any changed values.
Note: As of Weblogic 9 and higher, Execute Queues are now replaced by Work Managers. Another good link is here

Use Dedicated Execute Thread Queue for Intensive applications

Since the number of threads is small, if a particular application is seen to utilize a majority of the execute threads, the following 2 approaches are suggested to resolve the issue:
1. Review the design of the offending application to determine whether it really needs so many threads.
2. Move the offending application to a dedicated execute queue, with enough threads allocated to this queue. This will prevent it from starving the main server of threads and allow the ‘default’ queue to remain with a lower number of Execute threads. This split of Execute Queue can be done at servlet or webapp level. Mail me if you need an example, we've done both successfully in WL 8.1 and 9.

Note: However, as of Weblogic 9 and higher, Execute Queues are now replaced by Work Managers. You can use a Work Manager to dedicate resources at Application, Web App, EJB level.


Review Java VM Settings
Tuning the JVM settings for the Total Heap, and Young/Old Generations is essential to regulate the frequency of Garbage Collection on the servers. The basic primer is on the Sun website, and a follow up of actual values and learnings is published here. The most essential ones are Xms, Xmx for the total Heap and NewSize, NewRatio for the Young Generation. Also set PermSize and MaxPermSize appropriately to avoid consuming high memory
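As a sketch, a startup-script fragment combining these flags might look like the following. The sizes here are purely illustrative - derive real values from your own GC logs, not from this example.

```shell
#!/bin/sh
# Illustrative heap sizing for a WebLogic managed server (tune from GC logs).
# Initial and max heap are equal to avoid resize pauses; young generation
# and permanent generation are sized explicitly.
JAVA_OPTIONS="-Xms1024m -Xmx1024m \
 -XX:NewSize=256m -XX:MaxNewSize=256m \
 -XX:PermSize=128m -XX:MaxPermSize=256m \
 -verbose:gc"
echo "$JAVA_OPTIONS"
```

These would typically be appended to the JAVA_OPTIONS exported by the domain's start script.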

Other Areas

1. To speed up server start times, do not delete the .wlnotdelete directories at startup - Unless you are deploying changed application jars and code.
Be aware you might occasionally see a problem shutting down the server which goes into an UNKNOWN state due to too many old temp files and wl_internal files.
You will get the dreaded error below, which can only be resolved by killing the process and clearing out all temp files, .lck files etc. within the domain. The files are under DOMAIN_HOME/servers/<server_name>/



weblogic.management.NoAccessRuntimeException: Access not allowed for subject: principals=[weblogic, Deployers], on ResourceType: ServerLifeCycleRuntime Action: execute, Target: shutdown
at weblogic.rjvm.ResponseImpl.unmarshalReturn(ResponseImpl.java:195)
at weblogic.rmi.internal.BasicRemoteRef.invoke(BasicRemoteRef.java:224)



2. Avoid the URLEncoder.encode method call as much as possible. This call is not optimal in most JDKs < 1.5 and often shows up as a memory and CPU hotspot.

3. Check the network connection between WebLogic and the database if thread dumps show that threads are often in a suspended state (waiting for so long that they were suspended) while doing a socket read from the database.
The DBA wouldn't see this as a long-running SQL statement; it needs to be checked out at the network level.

4. Switch off all DEBUG options on Production on app server as well as web server and web server plugins.

5. Ensure log files are rotated so that they can be backed up and moved off the main log directory. Define a rotation policy based on file size or fixed time (like 24 hours)
However, also note that on certain platforms, if some application is tailing the log at the time of rotation, the rotation fails. Stop the application's tail and reopen it after the rotation is complete.

6. Do not use "Emulate Two-Phase Commit for non-XA Driver" for DataSources.
It is not a good idea to use emulated XA: it can result in loss of data integrity and cause heuristic exceptions. This option is only meant for databases for which no XA driver exists, so that the data sources for those pools can still participate in XA transactions.
If an XA driver is available (and there is one for Oracle), it should be used. If this option was selected because of problems with Oracle's Thin XA driver, try the newest version, or pick a different XA driver.
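For instance, a WebLogic 8.1-style config.xml connection pool entry using Oracle's genuine XA driver might look like the sketch below (the pool name, host, SID and target are hypothetical):

```xml
<!-- Hypothetical pool using the real Oracle Thin XA driver
     rather than emulated two-phase commit -->
<JDBCConnectionPool Name="myXAPool"
    DriverName="oracle.jdbc.xa.client.OracleXADataSource"
    URL="jdbc:oracle:thin:@dbhost:1521:MYSID"
    Properties="user=appuser"
    Targets="myserver"/>
```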

7. The <save-sessions-enabled> element in weblogic.xml controls whether session data is cleaned up during redeploy or undeploy.
It affects in-memory and replicated sessions. The default is false. Setting the value to true means session data is saved across redeploys, which is an overhead.
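The element sits inside container-descriptor in weblogic.xml; a minimal fragment (making the default behaviour explicit) would be:

```xml
<container-descriptor>
    <!-- false (the default) discards session data on redeploy;
         true preserves it at the cost of extra overhead -->
    <save-sessions-enabled>false</save-sessions-enabled>
</container-descriptor>
```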

8. If firewalls are present between the WebLogic server and the database, or an external system connecting via JMS, they can cause transaction and session timeout issues. The session timeout on the firewall should be configured to allow for normal transaction times.
In the case of JMS, transactional interoperation between the two servers can be compromised, hence it is beneficial to open the firewall between the two servers so that RMI/T3 connections can be made freely.


Any queries or clarifications, leave me a comment and I'll try to get back.

Slow Weblogic response - JSP and Servlet Reload

The top Weblogic tuning tip I've used over the last few years is the one below. This article deals with Weblogic 8 and 9.

You know the symptoms:

The JEE application works okay on Unit, Integration and Test servers.
But once you're into Performance and Stress Testing, it's totally choked up: pages are returned slowly, the Project Manager is asking for a code review to find the blockers, and so on.


The first thing to do on a slow JEE application is to take a thread dump, see more on that here

Now, the thing to look for in the Thread Dump, is multiple threads doing this:


weblogic.servlet.jsp.JspStub.checkForReload(JspStub.java:144)


or this:


waiting for monitor entry [c4f7f000..c4f7fc28]
at weblogic.servlet.internal.ServletStubImpl.checkForReload(ServletStubImpl.java:771)




This is not a stuck thread within your code; it is caused by a weblogic.xml setting that makes the server check on every request whether your servlet class or JSP has changed.

As documented here:

http://e-docs.bea.com/wls/docs92/webapp/weblogic_xml.html#wp1038491

Sets the interval, in seconds, at which WebLogic Server checks to see if JSP files have changed and need recompiling. Dependencies are also checked and recursively reloaded if changed.

If set to 0, pages are checked on every request. If set to -1, page checking and recompiling is disabled.

Most users set this to -1 to disable in Production unless they're altering jsps on the fly (which is not normal practice on a production system).


The correct syntax for this is:

<weblogic-web-app>
  <jsp-descriptor>
    <jsp-param>
      <param-name>pageCheckSeconds</param-name>
      <param-value>-1</param-value>
    </jsp-param>
  </jsp-descriptor>

The similar value for servlets is set using the servlet-reload-check-secs within the tags below.

  <container-descriptor>
    <servlet-reload-check-secs>-1</servlet-reload-check-secs>
  </container-descriptor>
</weblogic-web-app>


On non-production environments, you can set this to a reasonable value such as 600 seconds (10 minutes), so that changed JSPs you drop in or FTP to the deployed environment are picked up reasonably soon.

The other thing we must do on a production system is to precompile the JSPs as part of the build, before deploying to Live.

Otherwise, the first request to each JSP triggers compilation, which is quite slow and must be avoided on the Production system.
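One way to do this is to run the WebLogic precompiler over the archive during the build (weblogic.jspc on 8.1, weblogic.appc on 9.x). A sketch, with hypothetical paths and archive name; the helper only echoes the command so the sketch stays self-contained:

```shell
# Precompile JSPs at build time rather than on first request.
# 'run' echoes the command here; drop the echo to execute for real.
run() { echo "$@"; }

WL_JAR="$WL_HOME/server/lib/weblogic.jar"   # hypothetical location
run java -cp "$WL_JAR" weblogic.appc myapp.ear
```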


For the same solution on Weblogic 10, look at an updated article.

Wednesday, 15 April 2009

com.vladium.emma.EMMARuntimeException: [CLASS_STAMP_MISMATCH]

Emma is used for finding the code coverage of JUnit test classes.

When running Emma with a JEE container, the compiled .class files (not the .java source files) are first instrumented and then deployed on the J2EE server.

Then the test classes are executed from the Ant command line or via an Eclipse plug-in, and this writes out the coverage data into .em or .ec files.

Sometimes, an error observed when running Emma is


emma.report:
[report] processing input files ...
[report] 2 file(s) read and merged in 172 ms
[report] com.vladium.emma.EMMARuntimeException: [CLASS_STAMP_MISMATCH] runtime version of class [com.xyz.action.MyAction] in the coverage data is not consistent with the version of this class in the metadata, possibly because stale metadata is being used for report generation.

[report] at com.vladium.emma.report.ReportDataModel.getView(ReportDataModel.java:95)
[report] at com.vladium.emma.report.AbstractReportGenerator.initialize(AbstractReportGenerator.java:207)
[report] at com.vladium.emma.report.html.ReportGenerator.process(ReportGenerator.java:85)
[report] at com.vladium.emma.report.ReportProcessor._run(ReportProcessor.java:254)





To calculate coverage, EMMA combines two types of data:
metadata (static information about your Java classes, methods, lines and basic blocks) and runtime coverage data (which basic blocks have been executed).

The error occurs when the class versions used while recording the metadata and coverage files do not match: the runtime version of the class in the coverage data is not consistent with the same class in the metadata.

In simpler terms, the in-container application jars deployed on the JEE server are not in sync with the currently instrumented source from which EMMA is being run, hence the class stamp difference between the two EMMA files, metadata.em and coverage.ec.
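The practical fix is to regenerate both files from the same build: re-instrument the freshly built classes, redeploy, re-run the tests, and only then produce the report. A dry-run sketch of that cycle (file names and paths are hypothetical; the helper echoes rather than executes so the sketch stays self-contained):

```shell
# Keep metadata.em and coverage.ec from the SAME instrumented build.
run() { echo "$@"; }   # drop the echo to execute for real

# 1. Instrument the freshly compiled classes, writing the class metadata
run java -cp emma.jar emma instr -m overwrite -ip build/classes -out metadata.em
# 2. Redeploy the instrumented jars and run the tests (produces coverage.ec)
# 3. Generate the report from the matching pair of files
run java -cp emma.jar emma report -r html -in metadata.em,coverage.ec
```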


Update -
A couple of comments posted below have some more checks and solutions to this problem.

Thursday, 2 April 2009

Running Maven offline using local Repository

Sometimes you see Maven downloading POM files for dependencies which are already in your local repository and you're wondering why it does this.

Downloading: http://repo1.maven.org/maven2//org/apache/xmlgraphics/fop/0.95/fop-0.95.pom
Downloading: http://repo1.maven.org/maven2//org/apache/xmlgraphics/xmlgraphics-commons/1.3.1/xmlgraphics-commons-1.3.1.pom
Downloading: http://repo1.maven.org/maven2//org/apache/xmlgraphics/batik-svg-dom/1.7/batik-svg-dom-1.7.pom


The reason is that when these artifacts were installed in the local repository, the -DgeneratePom=true flag was not given.

If you run the install with this flag, Maven will generate and install a local POM file for you, and will not download the POM from the internet repository.

An example below
mvn install:install-file -DgroupId=%GROUP_ID% -DartifactId=%ARTIFACT_ID% -Dversion=%VERSION% -Dfile=%COMPONENT%.jar -Dpackaging=jar -DgeneratePom=true




There are 2 other ways to run the build without connecting to the internet or network repository each time - provided you have all the required dependencies available offline.

Note: This is just something I do when I want to run a quick repeat build and I know there are no dependencies updated in the mean time. If you do this regularly you will miss out on updates made to the maven repository by other developers.

1. Run the maven build in offline mode

mvn -o install

[INFO]
NOTE: Maven is executing in offline mode. Any artifacts not already in your local
repository will be inaccessible.

...

When you see this message, you know that maven will not be updating external dependencies during the build.

2. Point the repository to the local file system.

For repeated local code builds, when I know there is no change in dependencies, I have a profile set up that points the repository to the local file system.

In the repository definition, instead of
<url>http://repo1.maven.org/maven2/</url>

use the file:// path


<repository>
<id>central</id>
<url>file://D:\mavenrepo</url>
</repository>


You will still see messages indicating the POMs being downloaded, but this time from the local file system, which is much faster.
Downloading: file://D:\mavenrepo\cactus\1.7.2\cactus-1.7.2.pom
Downloading: file://D:\mavenrepo\strutstest\2.1.3\strutstest-2.1.3.pom

Friday, 13 March 2009

java.lang.ClassNotFoundException: org.apache.tools.ant.Main

This error should not occur when you install Ant out of the box and run it. However, on the rare occasions it does occur, the solution is quite simple.



java.lang.ClassNotFoundException: org.apache.tools.ant.Main
at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
at org.apache.tools.ant.launch.Launcher.run(Launcher.java:244)
at org.apache.tools.ant.launch.Launcher.main(Launcher.java:67)



Take a look at the ANT_HOME/lib folder - and ensure that the ant.jar is in there and showing a size of 1MB or higher.

It is usually due to a corrupted version of ant.jar in the above path.

Monday, 9 March 2009

Executing multiple mvn commands from a Windows bat file

When running Maven, if you try to run multiple Maven commands in a batch (.bat) file, only the first one runs before the script exits to the command prompt:
mvn clean
mvn package
This occurs because mvn is itself a bat file. To overcome this, you need to stop Windows from passing control permanently to the mvn.bat script, by using the call command for each invocation of mvn:
call mvn clean
call mvn package

Additionally, to catch failures you will need to do this

call mvn clean
echo Exit Code = %ERRORLEVEL%
if not "%ERRORLEVEL%" == "0" exit /b

call mvn package
echo Exit Code = %ERRORLEVEL%
if not "%ERRORLEVEL%" == "0" exit /b

A failure sets ERRORLEVEL to a value higher than 0; without these checks the script keeps moving ahead even if a step fails.

Note: if you're using Maven 2.0.7 or older, there is a possibility that mvn does not return an ERRORLEVEL > 0 for the process to exit. Lots of examples are available here: http://www.google.co.uk/search?hl=en&q=mvn+exit+ERRORLEVEL&meta=

A simple test to check whether your version allows this - run the commands below in a bat file.
call mvn unknown
echo Exit Code = %ERRORLEVEL%
You should get a Failure with exit code 1 as below. If it does not fail, then upgrade to a higher version of Maven in which the issue is fixed.
D:\maven-2.0.7\bin>call mvn unknown
[INFO] Scanning for projects...
[INFO] ----------------
[ERROR] BUILD FAILURE
[INFO] ----------------
[INFO] Invalid task 'unknown': you must 
specify a valid lifecycle phase, 
or a goal in the format plugin:goal or  
pluginGroupId:pluginArtifactId:pluginVersion:goal
[INFO] ----------------
[INFO] For more information, run Maven 
with the -e switch
[INFO] ----------------
[INFO] Total time: < 1 second
[INFO] Finished at: Tue Mar 10 12:07:12 
GMT+05:30 2009
[INFO] Final Memory: 1M/2M
[INFO] ----------------
Exit Code = 1

Update: Learnt the hard way that the call to mvn must immediately be followed by the check of ERRORLEVEL. Our build.bat script measured the time taken for each step like below:

time /t
call mvn install
time /t
if not "%ERRORLEVEL%" == "0" exit /b

In this case, the time /t command resets ERRORLEVEL, so the value from the mvn command is lost. Hence we need to ensure:

time /t
call mvn install
if not "%ERRORLEVEL%" == "0" exit /b
time /t
