JVM Learning Note 2 -- Garbage Collection and Memory Allocation Strategy

 

How to determine whether an object can be collected

Reference Counting Method

A reference counter is created along with each object.

The counter is incremented by one every time the object gains a reference.

The counter is decremented by one every time the object loses a reference.

The object becomes garbage when the counter reaches zero.

However, the JVM does not use reference counting, because it cannot easily handle the case where two objects reference each other:

objA.instance = objB;
objB.instance = objA;
objA = null;
objB = null;
// With reference counting, both counters would still be 1 even though neither object
// is reachable any more; the JVM's reachability analysis reclaims both after this call.
System.gc();

 

Root Tracing Method

GC Roots include the following:

·         Objects referenced from the local variable tables of Java stack frames

·         Objects referenced by static fields of classes in the method area (Perm Gen)

·         Objects referenced by constants in the constant pool of the method area (Perm Gen)

·         Objects referenced by native methods (JNI) on native stacks

GC Chain:

·         A GC chain (reference chain) starts at a GC Root and ends at the object being examined; an object that is not on any such chain is unreachable and eligible for collection.
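As a small illustration (the class and variable names here are made up for this note, not part of any JVM API), the first two kinds of GC Roots look like this in ordinary Java code:

public class GcRootsDemo {
    // Referenced by a static field of a loaded class: a GC Root in the method area.
    static Object staticRoot = new Object();

    public static void main(String[] args) {
        // Referenced from the local variable table of this stack frame: a GC Root on the Java stack.
        Object localRoot = new Object();

        Object temp = new Object();
        temp = null; // no longer reachable from any GC Root, so it becomes collectable

        System.out.println(staticRoot + " / " + localRoot);
    }
}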

 


Reference Types

Four reference types have been available since JDK 1.2:

Strong Reference

  • Implementation: an ordinary reference, e.g. Object strongRef = new Object()
  • How to get the referent: direct access
  • GC condition: collected only when no longer reachable from GC Roots
  • Usage scenario: the common case

Soft Reference

  • Implementation: java.lang.ref.SoftReference
  • How to get the referent: SoftReference.get()
  • GC condition: cleared before an OutOfMemoryError would be thrown; retention is tuned with -XX:SoftRefLRUPolicyMSPerMB
  • Usage scenario: memory-sensitive caches

Weak Reference

  • Implementation: java.lang.ref.WeakReference
  • How to get the referent: WeakReference.get()
  • GC condition: cleared at the next GC after the referent becomes only weakly reachable
  • Usage scenario: canonicalizing mappings (e.g. WeakHashMap)

Phantom Reference

  • Implementation: java.lang.ref.PhantomReference
  • How to get the referent: PhantomReference.get() always returns null; reclamation is observed through ReferenceQueue.poll()
  • GC condition: enqueued at the next GC after the referent becomes unreachable
  • Usage scenario: receiving a notification when the referent is being reclaimed
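A small runnable sketch of the non-strong reference types (the byte[] payloads and variable names are only illustrative, and the behaviour after System.gc() is a request, not a guarantee):

import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceDemo {
    public static void main(String[] args) throws InterruptedException {
        // Soft reference: kept until memory runs short.
        SoftReference<byte[]> soft = new SoftReference<>(new byte[1024]);

        // Weak reference: cleared at the next GC once only weakly reachable.
        WeakReference<byte[]> weak = new WeakReference<>(new byte[1024]);

        // Phantom reference: get() always returns null; reclamation is observed via the queue.
        ReferenceQueue<byte[]> queue = new ReferenceQueue<>();
        PhantomReference<byte[]> phantom = new PhantomReference<>(new byte[1024], queue);

        System.gc();            // request a collection
        Thread.sleep(200);      // give the collector and reference handling time to run

        System.out.println("soft.get()    = " + soft.get());    // usually still non-null
        System.out.println("weak.get()    = " + weak.get());    // likely null after GC
        System.out.println("phantom.get() = " + phantom.get()); // always null
        System.out.println("queue.poll()  = " + queue.poll());  // non-null once the referent is reclaimed
    }
}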

 


F-Queue and finalize()

 

 

 

· If an object is not reachable through any Reference Chain, it is marked and then checked to decide whether its finalize() method needs to be run.

·  If the object has not overridden finalize(), or its finalize() method has already been called, it is not enqueued into the F-Queue;

      Ø  the JVM finalizes the object directly and reclaims the memory it occupies.

·   Otherwise, the object is put into the F-Queue.

      Ø  On the F-Queue, the object's finalize() method is called by the JVM's low-priority Finalizer thread.

      Ø  If, inside its overridden finalize() method, the object reconnects itself to any object that is on a Reference Chain, it is revived and removed from the F-Queue.

 

 

From the diagram, we can understand that:

·         For any object, finalize() is called at most once.

·         If an object has overridden finalize(), it usually has to survive at least two GC passes before it is actually reclaimed.

·         Once an object has been finalized (destroyed) by the JVM, there is no way for it to become reachable again.
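A classic self-rescue sketch of the behaviour above (the class name is made up for this note; overriding finalize() is shown only to demonstrate the mechanism and is not recommended in real code):

public class FinalizeEscapeDemo {
    static FinalizeEscapeDemo saved;   // a static field, i.e. a GC Root

    @Override
    protected void finalize() throws Throwable {
        super.finalize();
        saved = this;                  // reconnect to a reference chain: the object is revived
    }

    public static void main(String[] args) throws InterruptedException {
        saved = new FinalizeEscapeDemo();

        saved = null;
        System.gc();
        Thread.sleep(500);             // give the low-priority Finalizer thread time to run
        System.out.println(saved != null ? "revived by finalize()" : "collected");

        saved = null;
        System.gc();                   // finalize() is never called a second time
        Thread.sleep(500);
        System.out.println(saved != null ? "revived again" : "collected");
    }
}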

 

 

 

 

 

 

Garbage Collection Algorithms

Mark-Sweep Algorithm

The mark-and-sweep algorithm consists of two phases:

  • In the first phase, it finds and marks all accessible objects. The first phase is called the mark phase.
  • In the second phase, the garbage collection algorithm scans through the heap and reclaims all the unmarked objects. The second phase is called the sweep phase.

 

As shown in the diagram above, the gray blocks before GC represent unreachable objects; after a mark-sweep GC they have been removed from the heap.

 

The disadvantages of the mark-sweep algorithm:

  • Low efficiency: normal program execution is suspended for the whole mark-sweep process.
  • Fragmentation: the memory freed from swept objects is left as scattered fragments, so the heap may become too fragmented to allocate a large object or array.
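A toy mark-and-sweep sketch over an explicit object graph (ToyObject and the list-based "heap" are inventions of this note; a real collector works on raw heap memory, not Java collections):

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

class ToyObject {
    final List<ToyObject> refs = new ArrayList<>(); // outgoing references
    boolean marked;
}

class ToyMarkSweep {
    // Mark phase: depth-first traversal from each GC Root.
    static void mark(ToyObject obj) {
        if (obj.marked) return;
        obj.marked = true;
        for (ToyObject ref : obj.refs) mark(ref);
    }

    // Sweep phase: scan the whole heap, reclaim unmarked objects, clear marks on survivors.
    static void sweep(List<ToyObject> heap) {
        for (Iterator<ToyObject> it = heap.iterator(); it.hasNext(); ) {
            ToyObject obj = it.next();
            if (obj.marked) obj.marked = false;
            else it.remove();           // the freed slot stays where it was: fragmentation
        }
    }

    static void collect(List<ToyObject> roots, List<ToyObject> heap) {
        for (ToyObject root : roots) mark(root);
        sweep(heap);
    }
}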

Mark-Compact Algorithm

The mark-and-compact algorithm consists of two phases:

  • In the first phase, it finds and marks all live objects. The first phase is called the mark phase.
  • In the second phase, the garbage collection algorithm compacts the heap by moving all the live objects into contiguous memory locations. The second phase is called the compaction  phase.

Compared with the mark-sweep algorithm, mark-and-compact avoids leaving fragments behind after GC, but the additional compaction (defragmentation) work takes more time.

 

 

Stop-Copy Algorithm

When using the stop-and-copy garbage collection algorithm, the heap is divided into two separate regions. At any point in time, all dynamically allocated object instances reside in only one of the two regions--the active region. The other, inactive region is unoccupied.

 

When the memory in the active region is exhausted, the program is suspended and the garbage-collection algorithm is invoked. The stop-and-copy algorithm copies all of the live objects from the active region to the inactive region. As each object is copied, all references contained in that object are updated to reflect the new locations of the referenced objects.

 

After the copying is completed, the active and inactive regions exchange their roles. Since the stop-and-copy algorithm copies only the live objects, the garbage objects are left behind. In effect, the storage occupied by the garbage is reclaimed all at once when the active region becomes inactive.

As the stop-and-copy algorithm copies the live objects from the active region to the inactive region, it stores the objects in contiguous memory locations. Thus, the stop-and-copy algorithm automatically defragments the heap. This is the main advantage of the stop-and-copy approach over the mark-and-sweep algorithm described in the preceding section.

The disadvantage of the stop-and-copy algorithm:

Only half of the heap memory is usable at any given time.

 

 

Generational Collection

Minor Garbage Collections

A minor collection is triggered when Eden becomes full. This is done by copying all the live objects in the new generation to either a survivor space or the tenured space as appropriate. Copying to the tenured space is known as promotion or tenuring. Promotion occurs for objects that are sufficiently old (-XX:MaxTenuringThreshold=<n>), or when the survivor space overflows.

Live objects are objects that are reachable by the application; any other objects cannot be reached and can therefore be considered dead. In a minor collection, the copying of live objects is performed by first following what are known as GC Roots, and iteratively copying anything reachable to the survivor space. GC Roots normally include references from application and JVM-internal static fields, and from thread stack-frames, all of which effectively point to the application’s reachable object graphs.

In generational collection, the GC Roots for the new generation’s reachable object graph also include any references from the old generation to the new generation. These references must also be processed to make sure all reachable objects in the new generation survive the minor collection. Identifying these cross-generational references is achieved by use of a “card table”. The Hotspot card table is an array of bytes in which each byte is used to track the potential existence of cross-generational references in a corresponding 512 byte region of the old generation. As references are stored to the heap, “store barrier” code will mark cards to indicate that a potential reference from the old generation to the new generation may exist in the associated 512 byte heap region. At collection time, the card table is used to scan for such cross-generational references, which effectively represent additional GC Roots into the new generation. Therefore a significant fixed cost of minor collections is directly proportional to the size of the old generation.
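A rough sketch of the card-marking arithmetic described above (the 512-byte card size matches the text, but the class, field, and method names are illustrative and do not mirror HotSpot's internal code):

class CardTableSketch {
    static final int CARD_SHIFT = 9;   // 2^9 = 512 bytes of old generation per card

    final long oldGenBase;             // start address of the old generation
    final byte[] cards;                // one byte per 512-byte region

    CardTableSketch(long oldGenBase, long oldGenSizeBytes) {
        this.oldGenBase = oldGenBase;
        this.cards = new byte[(int) (oldGenSizeBytes >>> CARD_SHIFT)];
    }

    // Store barrier: executed whenever a reference is written into the old generation.
    void markCard(long fieldAddressInOldGen) {
        int cardIndex = (int) ((fieldAddressInOldGen - oldGenBase) >>> CARD_SHIFT);
        cards[cardIndex] = 1;          // dirty: this region may hold an old-to-young reference
    }

    // At minor-collection time only the dirty cards are scanned; any old-to-young
    // references found there act as additional GC Roots into the young generation.
}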

There are two survivor spaces in the Hotspot new generation, which alternate in their “to-space” and “from-space” roles. At the beginning of a minor collection, the to-space survivor space is always empty, and acts as a target copy area for the minor collection. The previous minor collection’s target survivor space is part of the from-space, which also includes Eden, where live objects that need to be copied may be found.

The cost of a minor collection is usually dominated by the cost of copying objects to the survivor and tenured spaces. Objects that do not survive a minor collection are effectively free to be dealt with. The work done during a minor collection is directly proportional to the number of live objects found, and not to the size of the new generation. The total time spent doing minor collections can almost be halved each time the Eden size is doubled. Memory can therefore be traded for throughput. A doubling of Eden size will result in an increase in collection time per-collection cycle, but this is relatively small if both the number of objects being promoted and the size of the old generation are constant.

Note: In Hotspot minor collections are stop-the-world events. This is rapidly becoming a major issue as our heaps get larger with more live objects. We are already starting to see the need for concurrent collection of the young generation to reach pause-time targets.

Major Garbage Collections

Major collections collect the old generation so that objects can be promoted from the young generation. In most applications, the vast majority of program state ends up in the old generation. The greatest variety of GC algorithms exists for the old generation. Some will compact the whole space when it fills, whereas others will collect concurrently with the application in an effort to prevent it from filling up.

The old generation collector will try to predict when it needs to collect to avoid a promotion failure from the young generation. The collectors track a fill threshold for the old generation and begin collection when this threshold is passed. If this threshold is not sufficient to meet promotion requirements then a “FullGC” is triggered. A FullGC involves promoting all live objects from the young generations followed by a collection and compaction of the old generation. Promotion failure is a very expensive operation as state and promoted objects from this cycle must be unwound so the FullGC event can occur.

Note: To avoid promotion failure you will need to tune the padding that the old generation allows to accommodate promotions (‑XX:PromotedPadding=<n>).

Note: When the Heap needs to grow a FullGC is triggered. These heap-resizing FullGCs can be avoided by setting –Xms and –Xmx to the same value.
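For example, a launch command with a fixed-size heap (the 4g figure and the MyApp class name are placeholders):

java -Xms4g -Xmx4g -XX:+PrintGCDetails -XX:+PrintGCTimeStamps MyApp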

Other than a FullGC, a compaction of the old generation is likely to be the largest stop-the-world pause an application will experience. The time for this compaction tends to grow linearly with the number of live objects in the tenured space.

The rate at which the tenured space fills up can sometimes be reduced by increasing the size of the survivor spaces and the age of objects before being promoted to the tenured generation. However, increasing the size of the survivor spaces and object age in Minor collections (–XX:MaxTenuringThreshold=<n>) before promotion can also increase the cost and pause times in the minor collections due to the increased copy cost between survivor spaces on minor collections.

 

 

 

Garbage Collectors Introduction

Serial Collector

The Serial collector (-XX:+UseSerialGC) is the simplest collector and is a good option for single processor systems. It also has the smallest footprint of any collector. It uses a single thread for both minor and major collections. Objects are allocated in the tenured space using a simple bump the pointer algorithm. Major collections are triggered when the tenured space is full.

Parallel Collector

The Parallel collector comes in two forms. The Parallel collector (-XX:+UseParallelGC) uses multiple threads to perform minor collections of the young generation and a single thread for major collections on the old generation. The Parallel Old collector (-XX:+UseParallelOldGC), the default since Java 7u4, uses multiple threads for minor collections and multiple threads for major collections. Objects are allocated in the tenured space using a simple bump-the-pointer algorithm. Major collections are triggered when the tenured space is full.

On multiprocessor systems the Parallel Old collector will give the greatest throughput of any collector. It has no impact on a running application until a collection occurs, and then will collect in parallel using multiple threads using the most efficient algorithm. This makes the Parallel Old collector very suitable for batch applications.
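A typical throughput-oriented launch for such a batch application might look like this (the heap sizes and the BatchJob class name are placeholders):

java -XX:+UseParallelOldGC -Xms8g -Xmx8g \
     -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log BatchJob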

The cost of collecting the old generations is affected by the number of objects to retain to a greater extent than by the size of the heap. Therefore the efficiency of the Parallel Old collector can be increased to achieve greater throughput by providing more memory and accepting larger, but fewer, collection pauses.

Expect the fastest minor collections with this collector because the promotion to tenured space is a simple bump the pointer and copy operation.

For server applications the Parallel Old collector should be the first port-of-call. However if the major collection pauses are more than your application can tolerate then you need to consider employing a concurrent collector that collects the tenured objects concurrently while the application is running.

Note: Expect pauses in the order of one to five seconds per GB of live data on modern hardware while the old generation is compacted.

Note: The parallel collector can sometimes gain performance benefits from -XX:+UseNUMA on multi-socket CPU server applications by allocating Eden memory for threads local to the CPU socket. It is a shame this feature is not available to the other collectors.

Concurrent Mark Sweep (CMS) Collector

The CMS (-XX:+UseConcMarkSweepGC) collector runs in the Old generation collecting tenured objects that are no longer reachable during a major collection. It runs concurrently with the application with the goal of keeping sufficient free space in the old generation so that a promotion failure from the young generation does not occur.

Promotion failure will trigger a FullGC. CMS follows a multistep process:
  1. Initial Mark : Find GC Roots.
  2. Concurrent Mark: Mark all reachable objects from the GC Roots.
  3. Concurrent Pre-clean: Check for object references that have been updated and objects that have been promoted during the concurrent mark phase by remarking.
  4. Re-mark : Capture object references that have been updated since the Pre-clean stage.
  5. Concurrent Sweep: Update the free-lists by reclaiming memory occupied by dead objects.
  6. Concurrent Reset: Reset data structures for next run.
As tenured objects become unreachable, the space is reclaimed by CMS and put on free-lists. When promotion occurs, the free-lists must be searched for a suitable sized hole for the promoted object. This increases the cost of promotion and thus increases the cost of the Minor collections compared to the Parallel Collector.

Note: CMS is not a compacting collector, which over time can result in old generation fragmentation. Object promotion can fail because a large object may not fit in the available holes in the old generation. When this happens a “promotion failed” message is logged and a FullGC is triggered to compact the live tenured objects. For such compaction-driven FullGCs, expect pauses to be worse than major collections using the Parallel Old collector because CMS uses only a single thread for compaction.

CMS is mostly concurrent with the application, which has a number of implications. First, CPU time is taken by the collector, thus reducing the CPU available to the application. The amount of time required by CMS grows linearly with the amount of object promotion to the tenured space. Second, for some phases of the concurrent GC cycle, all application threads have to be brought to a safepoint for marking GC Roots and performing a parallel re-mark to check for mutation.

Note: If an application sees significant mutation of tenured objects then the re-mark phase can be significant, at the extremes it may take longer than a full compaction with the Parallel Old collector.

CMS makes FullGC a less frequent event at the expense of reduced throughput, more expensive minor collections, and greater footprint. The reduction in throughput can be anything from 10%-40% compared to the Parallel collector, depending on promotion rate. CMS also requires a 20% greater footprint to accommodate additional data structures and “floating garbage” that can be missed during the concurrent marking that gets carried over to the next cycle.

High promotion rates and resulting fragmentation can sometimes be reduced by increasing the size of both the young and old generation spaces.

Note: CMS can suffer “concurrent mode failures”, which can be seen in the logs, when it fails to collect at a sufficient rate to keep up with promotion. This can be caused when the collection commences too late, which can sometimes be addressed by tuning. But it can also occur when the collection rate cannot keep up with the high promotion rate or with the high object mutation rate of some applications. If the promotion rate, or mutation rate, of the application is too high then your application might require some changes to reduce the promotion pressure. Adding more memory to such a system can sometimes make the situation worse, as CMS would then have more memory to scan.
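A common starting point when trying CMS (the sizes and the occupancy fraction below are placeholders to be tuned against the application's GC log):

java -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
     -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=75 \
     -Xms8g -Xmx8g -Xmn2g \
     -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log MyServer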

Garbage First (G1) Collector

G1 (-XX:+UseG1GC) is a new collector introduced in Java 6 and now officially supported as of Java 7u4. It is a partially concurrent collecting algorithm that also tries to compact the tenured space in smaller incremental stop-the-world pauses to try and minimize the FullGC events that plague CMS because of fragmentation. G1 is a generational collector that organizes the heap differently from the other collectors by dividing it into a large number (~2000) of fixed size regions of variable purpose, rather than contiguous regions for the same purpose.

 

 G1 takes the approach of concurrently marking regions to track references between regions, and to focus collection on the regions with the most free space. These regions are then collected in stop-the-world pause increments by evacuating the live objects to an empty region, thus compacting in the process.  The regions to be collected in a cycle are known as the Collection Set.

Objects larger than 50% of a region are allocated in humongous regions, which are a multiple of region size. Allocation and collection of humongous objects can be very costly under G1, and to date has had little or no optimisation effort applied.

The challenge with any compacting collector is not the moving of objects but the updating of references to those objects. If an object is referenced from many regions then updating those references can take significantly longer than moving the object. G1 tracks which objects in a region have references from other regions via the “Remembered Sets”. Remembered Sets are collections of cards that have been marked for mutation. If the Remembered Sets become large then G1 can significantly slow down. When evacuating objects from one region to another, the length of the associated stop-the-world event tends to be proportional to the number of regions with references that need to be scanned and potentially patched.

Maintaining the Remembered Sets increases the cost of minor collections resulting in pauses greater than those seen with Parallel Old or CMS collectors for Minor collections.

G1 is driven by a latency target (-XX:MaxGCPauseMillis=<n>, default 200 ms). The target influences the amount of work done on each cycle on a best-efforts-only basis. Setting targets in the tens of milliseconds is mostly futile, and as of this writing targeting tens of milliseconds has not been a focus of G1.
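For example (the heap size and pause target are placeholders; -XX:G1HeapRegionSize is optional and is otherwise chosen by the JVM based on heap size):

java -XX:+UseG1GC -Xms16g -Xmx16g -XX:MaxGCPauseMillis=200 \
     -XX:G1HeapRegionSize=8m -XX:+PrintGCDetails -Xloggc:gc.log MyServer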

G1 is a good general-purpose collector for larger heaps that have a tendency to become fragmented when an application can tolerate pauses in the 0.5-1.0 second range for incremental compactions. G1 tends to reduce the frequency of the worst-case pauses seen by CMS because of fragmentation at the cost of extended minor collections and incremental compactions of the old generation. Most pauses end up being constrained to regional rather than full heap compactions.

Like CMS, G1 can also fail to keep up with promotion rates, and will fall back to a stop-the-world FullGC. Just like CMS has “concurrent mode failure”, G1 can suffer an evacuation failure, seen in the logs as “to-space overflow”. This occurs when there are no free regions into which objects can be evacuated, which is similar to a promotion failure. If this occurs, try using a larger heap and more marking threads, but in some cases application changes may be necessary to reduce allocation rates.

A challenging problem for G1 is dealing with popular objects and regions. Incremental stop-the-world compaction works well when regions have live objects that are not heavily referenced from other regions. If an object or region is popular then the Remembered Set will be large, and G1 will try to avoid collecting those objects. Eventually it can have no choice, which results in very frequent mid-length pauses as the heap gets compacted.

 

 

 

 

 ----------------------

JVM Garbage Collector Usage Survey: CMS Is the Most Popular

Plumbr recently conducted a survey on the usage of specific garbage collectors (GC).

The data comes from 84,936 sessions representing 2,670 different environments. In 13% of the environments a garbage collector had been explicitly specified; the rest left the choice to the JVM. From the 11,062 cases with an explicitly specified collector, the researchers produced the following pie chart based on how many times each collector was used:

 

Terminology

    Serial: the serial collector; all application threads are paused while garbage is collected
    Parallel: the parallel collector, a multi-threaded version of the serial collector, intended for multi-CPU machines
    ParallelOld: the Parallel collector for the old generation
    ConcMarkSweep: CMS for short, a concurrent collector that performs part of its work concurrently with the application threads
    CMSIncrementalMode: a variant of CMS; an incremental collector that alternates between collector and application threads during concurrent marking and sweeping
    G1: a collector aimed at server-side applications, planned to eventually replace CMS

 

 

 

87% of the cases did not specify a garbage collector

Before looking at the details of collector usage, let us see why the other 87% of the cases do not appear in the pie chart above. The study found two different reasons for this:

  •     The JVM handles the default case quite sensibly, so developers did not need to specify a collector
  •     For some teams, application performance may not be a high priority, so no collector was specified


So the research team excluded the JVM cases that used the default garbage collector. But then, what is the default garbage collector? The question is both simple and complicated. If the JVM runs in client mode, the default garbage collector is the serial collector (Serial GC, -XX:+UseSerialGC); in server mode the default is the parallel collector (Parallel GC, -XX:+UseParallelGC). Whether the JVM runs in client mode or server mode depends on the following:

 

 

Most cases did not make the best choice

Let us return to the 13% of cases that explicitly specified a garbage collector; only a small portion of those users followed the recommendations in the table above. According to the statistics, only 31 cases chose the serial collector as the best match for their hardware, which is understandable given that most services today run on multi-core servers.

 

 

 

From the chart above we can see that Parallel and ParallelOld are used almost equally often. If the parallel young-generation collector better matches your needs, then choose it. The first table also shows that the parallel collector is already the default on most platforms, so the fact that no collector was explicitly specified does not mean the default collector is unpopular.

As for CMSIncrementalMode, only 935 environments used it, compared with 6,655 environments that used the classic CMS (ConcMarkSweep). A reminder here: during the concurrent phases, the collector threads occupy one or more processors. An incremental collector uses its collection algorithm to split one long pause into many small pauses, reducing the collector's impact on the application.

Another result of the study is the adoption rate of G1: 826 environments used it. Under comparable conditions, however, G1 performs somewhat worse than CMS.

 

The two JIT compilers of the HotSpot VM:


Client Compiler (C1):

Method inlining: controlled by -XX:MaxInlineSize=<bytes>.

Devirtualization: performs class hierarchy analysis; if it finds that a method has only one implementing class, calls to that method can be inlined.

Dead-code elimination: folds or removes code based on runtime profiling.

Server Compiler (C2):

Scalar replacement: replaces an aggregate with scalars, e.g. replaces an object with its primitive fields.

Stack allocation: objects that escape analysis shows do not escape can be allocated directly on the stack rather than on the JVM heap.

Lock elision: if the object being synchronized on does not escape, the synchronization can be removed.
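A small sketch of code that C2's escape analysis can optimise (the class is made up for this note; the flags shown exist in HotSpot, and escape analysis is enabled by default in modern server JVMs):

// Run with e.g.: java -XX:+DoEscapeAnalysis -XX:+EliminateAllocations -XX:+EliminateLocks EscapeDemo
public class EscapeDemo {
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    static long lengthSquared(int x, int y) {
        Point p = new Point(x, y);        // never escapes this method, so:
        synchronized (p) {                //  - lock elision can drop this synchronization
            return (long) p.x * p.x + (long) p.y * p.y; //  - scalar replacement can avoid allocating p
        }
    }

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 10_000_000; i++) {
            sum += lengthSquared(i, i + 1);
        }
        System.out.println(sum);
    }
}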

There are several reasons why the Sun JDK does not compile everything to machine code right at startup:

Dynamic compilation is driven by runtime behaviour: the longer C2 can gather runtime profiling data, the better the generated code becomes compared to static compilation.

Interpreted execution uses less memory than compiled execution.

Starting in interpreted mode is faster than compiling first and then executing.

 

 

Memory > 16 GB

4 or more CPU cores

64-bit operating system, Linux server

JDK 7+

Enterprise production application

Reference JVM parameter settings:

 

-Xms6000M  
-Xmx6000M  
-Xmn500M  
-XX:PermSize=500M  
-XX:MaxPermSize=500M  
-XX:SurvivorRatio=65536 
-XX:MaxTenuringThreshold=0 
-Xnoclassgc  
-XX:+DisableExplicitGC  
-XX:+UseParNewGC  
-XX:+UseConcMarkSweepGC  
-XX:+UseCMSCompactAtFullCollection  
-XX:CMSFullGCsBeforeCompaction=0 
-XX:+CMSClassUnloadingEnabled
-XX:+CMSParallelRemarkEnabled  
-XX:CMSInitiatingOccupancyFraction=90 
-XX:SoftRefLRUPolicyMSPerMB=0 
-XX:+PrintClassHistogram  
-XX:+PrintGCDetails  
-XX:+PrintGCTimeStamps  
-XX:+PrintHeapAtGC  
-Xloggc:log/gc.log  
 

 

-XX:SurvivorRatio=65536

-XX:MaxTenuringThreshold=0

Together these two settings effectively remove the survivor spaces.

 

-Xnoclassgc

Disables class garbage collection, which gives slightly better performance.

 

-XX:+DisableExplicitGC disables System.gc(), so that a programmer accidentally calling gc() cannot hurt performance.

 

-XX:+UseParNewGC collects the young generation with multiple parallel threads, which makes collections faster.

 

-XX:+UseCMSCompactAtFullCollection: the old generation is collected with CMS, which runs alongside (and competes with) the application threads to improve responsiveness; this flag additionally compacts the old generation whenever a full collection occurs.

 

-XX:CMSFullGCsBeforeCompaction=0: compact the old generation on every full collection.

 

-XX:+CMSParallelRemarkEnabled: when collecting with CMS, use multiple threads for the second marking pass (remark).

 

CMSInitiatingOccupancyFraction

Setting this parameter takes some care. As long as
(Xmx - Xmn) * (100 - CMSInitiatingOccupancyFraction) / 100 >= Xmn
no promotion failed will occur.
In this application Xmx is 6000M and Xmn is 500M, so Xmx - Xmn is 5500M, i.e. the old generation has 5500M.
CMSInitiatingOccupancyFraction=90 means a CMS cycle on the old generation starts when it is 90% full,
leaving 10% of the space free: 5500M * 10% = 550M.
So even if every object in the young generation (500M in total) were moved into the old generation, the 550M of free space would still be enough.
Therefore, as long as the formula above holds, promotion failed will not occur during garbage collection.

-XX:SoftRefLRUPolicyMSPerMB=0: this parameter sets how long an object reachable through a soft reference may survive after it was last referenced: (free heap space in MB) * (value of this parameter) milliseconds, i.e. that many milliseconds per MB of free heap. The default is 1000, i.e. one second per MB. This is only an approximate theoretical value, because the actual lifetime also depends on the collection algorithm and the scheduling priority of the collector threads. For example, with 20 MB of free heap and the default value, an object reachable through a soft reference at that moment may survive about 20 seconds.
-XX:LargePageSizeInBytes=128M: not supported on Windows; on other systems -XX:+UseLargePages must also be set.
Improved configuration:
-Xmx4000M  
-Xms4000M  
-Xmn600M  
-XX:PermSize=500M  
-XX:MaxPermSize=500M  
-Xss256K  
-XX:+DisableExplicitGC  
-XX:SurvivorRatio=1 
-XX:+UseConcMarkSweepGC  
-XX:+UseParNewGC  
-XX:+CMSParallelRemarkEnabled  
-XX:+UseCMSCompactAtFullCollection  
-XX:CMSFullGCsBeforeCompaction=0 
-XX:+CMSClassUnloadingEnabled  
-XX:LargePageSizeInBytes=128M  
-XX:+UseFastAccessorMethods  
-XX:+UseCMSInitiatingOccupancyOnly  
-XX:CMSInitiatingOccupancyFraction=80 
-XX:SoftRefLRUPolicyMSPerMB=0 
-XX:+PrintClassHistogram  
-XX:+PrintGCDetails  
-XX:+PrintGCTimeStamps  
-XX:+PrintHeapAtGC  
-Xloggc:log/gc.log 
 
This configuration keeps the survivor spaces and enlarges them so that a survivor space is as large as Eden; the young generation is 600M.
MaxTenuringThreshold is left at its default value.
This way there are neither long pauses nor promotion failures, and, more importantly, the old and permanent generations grow very slowly (because many objects are collected before they ever reach the old generation), so CMS runs very infrequently, only once every few hours.
In addition:
-XX:ParallelGCThreads=N, where N is the number of CPUs; if the CPU count is greater than 8, use N = number of CPUs x 2.
 
Enabling large page support on Windows:
Run -> gpedit.msc
Computer Configuration -> Windows Settings -> Security Settings -> Local Policies -> User Rights Assignment
Double-click "Lock pages in memory",
add your own group or user name,
restart the system,
and then -XX:+UseLargePages will take effect.
 
 
 
 
 
 
 
 
 
 
 

 
