How to Loadbalance GlassFish Cluster with Apache Loadbalancer
Since GlassFish V1, it has been possible to front-end a GlassFish instance with Apache's httpd web server, after following a few simple configuration steps, which include defining the com.sun.enterprise.web.connector.enableJK system property on the GlassFish instance, and specifying the port number of the mod_jk listener on the GlassFish instance as its value. By specifying this system property, the mod_jk connector, which comes standard with GlassFish (minus the JAR files that need to be copied from a Tomcat installation as per the configuration steps referenced above), will be started automatically and will listen on the specified port for any traffic sent by the httpd front-end over the AJP protocol. (Please note that when you follow the configuration steps referenced above, you must use the tomcat-ajp.jar from Tomcat 5.5.23; the tomcat-ajp.jar bundled with a more recent Tomcat release will not work.)
A common use case for front-ending GlassFish with httpd is to have httpd serve any requests for static resources, while having any requests for dynamic resources, such as servlets and JavaServer(TM) Pages (JSPs), forwarded to, and handled by, the GlassFish backend instance.
However, up until now, support for Apache's httpd has been limited to a single GlassFish instance, and there has been great interest on the GlassFish user forum in having an entire cluster of GlassFish instances load-balanced by Apache, allowing users to transition from an Apache-loadbalanced cluster of Tomcat instances to an Apache-loadbalanced cluster of GlassFish instances and take advantage of the in-memory session replication feature introduced in GlassFish V2.

We have listened to the GlassFish user community and added the requested feature to the SJSAS 9.1 UR1 release. In other words, with the upcoming SJSAS 9.1 UR1 release, it will be possible to load-balance a cluster of GlassFish instances with Apache.
In order to support stickiness, Apache's loadbalancer relies on a jvmRoute being included in any JSESSIONID it receives. The jvmRoute, which is separated from the session id by a ".", and whose value is configured via a system property of the same name, identifies the cluster instance on which the HTTP session was generated, or on which it was last resumed. This means that every GlassFish instance in a cluster that is front-ended by Apache's loadbalancer must be configured with a jvmRoute system property whose value is unique within the cluster.
For example, if an HTTP session was generated on a cluster instance with a jvmRoute system property equal to instance1, the JSESSIONID returned to the client (via an HTTP cookie or URL rewriting) will contain the session id with the string .instance1 appended to it. A subsequent request that is intercepted by the Apache loadbalancer will include the same JSESSIONID value that was returned to the client; from its jvmRoute suffix, the Apache loadbalancer can determine the instance on which the HTTP session was last served, and direct the request to it. Should that instance have failed in the meantime, the Apache loadbalancer will select a different instance from the remaining healthy instances, and have the request fail over to it. For example, if the request fails over to an instance whose jvmRoute system property is equal to instance2, the response generated from that instance will include a JSESSIONID containing the session id with .instance2 (instead of .instance1) appended to it.
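The sticky-routing decision just described can be sketched in a few lines. This is a minimal Python sketch of the idea only — the real logic lives in mod_jk, written in C — and the worker names and healthy-instance list are illustrative:

```python
# Sketch of the sticky-routing decision made by Apache's loadbalancer
# (illustrative only; not mod_jk's actual code).

def extract_jvm_route(jsessionid):
    """Return the jvmRoute suffix of a JSESSIONID, if present.

    The jvmRoute is separated from the session id by a '.'.
    """
    _session_id, sep, route = jsessionid.rpartition(".")
    return route if sep else None

def pick_worker(jsessionid, healthy_workers):
    """Prefer the instance named by the jvmRoute; fail over otherwise."""
    route = extract_jvm_route(jsessionid)
    if route in healthy_workers:
        return route                  # sticky: same instance as before
    return healthy_workers[0]         # failover: pick a healthy instance

# A session created on instance1 stays sticky to instance1 ...
print(pick_worker("ABC123.instance1", ["instance1", "instance2"]))  # -> instance1
# ... but fails over once instance1 is gone.
print(pick_worker("ABC123.instance1", ["instance2", "instance3"]))  # -> instance2
```

Note how failover falls out of the same lookup: the jvmRoute is only a preference, not a hard requirement.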
The challenge we were facing when adding support for the jvmRoute feature to GlassFish was that while the Apache loadbalancer expects the jvmRoute, whose value may change over the lifetime of its associated HTTP session, to be part of the JSESSIONID, we had to shield the session management in GlassFish from the jvmRoute, in order to preserve the invariant (from the session management's perspective) that session ids are immutable and remain constant over the lifetime of a session.
We've addressed this challenge by having the web container strip any jvmRoute off an incoming JSESSIONID (and use the remainder as the session id of the session to be resumed), and append a jvmRoute to the session id when forming a JSESSIONID. Of course, the web container processes a JSESSIONID in this way only if the jvmRoute system property has been set.
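This strip-and-append behaviour can be illustrated with a short sketch. The helper names are hypothetical; GlassFish's actual implementation lives inside its web container:

```python
# Sketch of how the web container shields session management from the
# jvmRoute (hypothetical helper names, not GlassFish's actual code).

JVM_ROUTE = "instance2"   # this instance's jvmRoute system property

def strip_jvm_route(jsessionid):
    """Incoming request: drop any jvmRoute suffix; the remainder is the
    immutable session id used to resume the session.  The suffix may
    name a different instance if the request failed over."""
    session_id, sep, _route = jsessionid.rpartition(".")
    return session_id if sep else jsessionid

def append_jvm_route(session_id):
    """Outgoing response: append this instance's own jvmRoute, so the
    loadbalancer keeps the session sticky here from now on."""
    return session_id + "." + JVM_ROUTE if JVM_ROUTE else session_id

# A session that failed over from instance1 is resumed by its raw id
# and re-tagged with this instance's route:
sid = strip_jvm_route("ABC123.instance1")   # -> "ABC123"
print(append_jvm_route(sid))                # -> "ABC123.instance2"
```

Session management only ever sees the bare id ("ABC123"), which is what preserves the immutable-session-id invariant described above.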
One of the side effects of this change has been that, since a jvmRoute is dynamic, the web container now adds a JSESSIONID cookie to every response, regardless of whether an HTTP session was created or resumed by the corresponding request, provided that the jvmRoute system property has been set.

The remainder of this blog covers the important configuration aspects. In order to load-balance a GlassFish cluster via Apache, follow these steps:
First, define the jvmRoute and com.sun.enterprise.web.connector.enableJK system properties at the GlassFish cluster level. For example, in the case of a cluster named "cluster1", run these commands:
asadmin create-jvm-options --target cluster1 "-DjvmRoute=\${AJP_INSTANCE_NAME}"
asadmin create-jvm-options --target cluster1 "-Dcom.sun.enterprise.web.connector.enableJK=\${AJP_PORT}"
asadmin create-system-properties --target instance9 AJP_INSTANCE_NAME=instance9
asadmin create-system-properties --target instance9 AJP_PORT=8020
Notice how the port number (8020) specified for the mod_jk connector on "instance9" matches the value of the corresponding worker.instance9.port property in the sample workers.properties file below.
Next, define one worker per cluster instance for the mod_jk connector in Apache's workers.properties configuration file. Make sure that the name of each worker equals the value of the jvmRoute system property of the GlassFish instance to which the worker connects. This convention makes it possible for an HTTP session to remain sticky to the GlassFish instance on which the session was created, or on which the session was last resumed.
The following sample workers.properties configuration file load-balances a 9-instance GlassFish cluster in which the instances are spread over three physical server machines: my.domain1.com, my.domain2.com, and my.domain3.com:
# Define the single worker exposed to httpd: the load balancer
worker.list=loadbalancer
# Set properties for instance1
worker.instance1.type=ajp13
worker.instance1.host=my.domain1.com
worker.instance1.port=8012
worker.instance1.lbfactor=50
worker.instance1.cachesize=10
worker.instance1.cache_timeout=600
worker.instance1.socket_keepalive=1
worker.instance1.socket_timeout=300
# Set properties for instance4
worker.instance4.type=ajp13
worker.instance4.host=my.domain1.com
worker.instance4.port=8015
worker.instance4.lbfactor=50
worker.instance4.cachesize=10
worker.instance4.cache_timeout=600
worker.instance4.socket_keepalive=1
worker.instance4.socket_timeout=300
# Set properties for instance7
worker.instance7.type=ajp13
worker.instance7.host=my.domain1.com
worker.instance7.port=8018
worker.instance7.lbfactor=50
worker.instance7.cachesize=10
worker.instance7.cache_timeout=600
worker.instance7.socket_keepalive=1
worker.instance7.socket_timeout=300
# Set properties for instance2
worker.instance2.type=ajp13
worker.instance2.host=my.domain2.com
worker.instance2.port=8013
worker.instance2.lbfactor=50
worker.instance2.cachesize=10
worker.instance2.cache_timeout=600
worker.instance2.socket_keepalive=1
worker.instance2.socket_timeout=300
# Set properties for instance5
worker.instance5.type=ajp13
worker.instance5.host=my.domain2.com
worker.instance5.port=8016
worker.instance5.lbfactor=50
worker.instance5.cachesize=10
worker.instance5.cache_timeout=600
worker.instance5.socket_keepalive=1
worker.instance5.socket_timeout=300
# Set properties for instance8
worker.instance8.type=ajp13
worker.instance8.host=my.domain2.com
worker.instance8.port=8019
worker.instance8.lbfactor=50
worker.instance8.cachesize=10
worker.instance8.cache_timeout=600
worker.instance8.socket_keepalive=1
worker.instance8.socket_timeout=300
# Set properties for instance3
worker.instance3.type=ajp13
worker.instance3.host=my.domain3.com
worker.instance3.port=8014
worker.instance3.lbfactor=50
worker.instance3.cachesize=10
worker.instance3.cache_timeout=600
worker.instance3.socket_keepalive=1
worker.instance3.socket_timeout=300
# Set properties for instance6
worker.instance6.type=ajp13
worker.instance6.host=my.domain3.com
worker.instance6.port=8017
worker.instance6.lbfactor=50
worker.instance6.cachesize=10
worker.instance6.cache_timeout=600
worker.instance6.socket_keepalive=1
worker.instance6.socket_timeout=300
# Set properties for instance9
worker.instance9.type=ajp13
worker.instance9.host=my.domain3.com
worker.instance9.port=8020
worker.instance9.lbfactor=50
worker.instance9.cachesize=10
worker.instance9.cache_timeout=600
worker.instance9.socket_keepalive=1
worker.instance9.socket_timeout=300
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=instance1,instance2,instance3,instance4,instance5,instance6,instance7,instance8,instance9
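With nine workers it is easy to mistype an entry, so a quick sanity check is to confirm that every name listed in balance_workers has matching worker definitions. The following is a small illustrative sketch for files of the shape shown above; it is not part of mod_jk:

```python
# Sanity-check a workers.properties-style file: every worker listed in
# balance_workers should have at least a .type, .host and .port entry.
# (Helper sketch for this article, not part of mod_jk.)

def check_workers(text):
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                      # skip blanks and comments
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()

    missing = []
    lb = props.get("worker.loadbalancer.balance_workers", "")
    for name in filter(None, (n.strip() for n in lb.split(","))):
        for attr in ("type", "host", "port"):
            if "worker.%s.%s" % (name, attr) not in props:
                missing.append("worker.%s.%s" % (name, attr))
    return missing

sample = """
worker.instance1.type=ajp13
worker.instance1.host=my.domain1.com
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=instance1
"""
print(check_workers(sample))   # -> ['worker.instance1.port']
```

An empty result means every balanced worker is at least minimally defined; each reported key points at a missing line.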
Finally, reference the loadbalancer worker specified in your workers.properties file from your httpd.conf. The following snippet from httpd.conf causes any JSP requests to be load-balanced over the GlassFish cluster configured in the above workers.properties file:
JkWorkersFile workers.properties
# Loadbalance all JSP requests over GlassFish cluster
JkMount /*.jsp loadbalancer
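A JkMount pattern such as /*.jsp behaves much like a glob: matching URIs are forwarded to the named worker, and everything else is left for httpd to serve. As a rough approximation of that split (this sketch is not mod_jk's actual matcher):

```python
from fnmatch import fnmatch

# Rough approximation of JkMount matching: '/*.jsp' forwards every
# request whose URI ends in '.jsp' to the 'loadbalancer' worker, while
# other URIs are left for httpd to serve (e.g. static resources).

MOUNTS = [("/*.jsp", "loadbalancer")]

def route(uri):
    for pattern, worker in MOUNTS:
        if fnmatch(uri, pattern):
            return worker
    return "httpd (static)"

print(route("/shop/cart.jsp"))    # -> loadbalancer
print(route("/images/logo.png"))  # -> httpd (static)
```

This is the static/dynamic split described at the start of the article: httpd keeps the static resources, and only dynamic requests cross the AJP connection to the cluster.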
As soon as the cluster instance to which an HTTP session has been sticky fails, the loadbalancer will route any subsequent requests for the same HTTP session to a different instance. This instance will be able to load and resume the requested session using the in-memory session replication feature that has been available since GlassFish V2.

The in-memory session replication feature is enabled only for those web applications that have been marked as distributable in their web.xml deployment descriptor, and that have been deployed to the cluster with the --availabilityenabled option of the asadmin deploy command set to true (the default is false).