Tomcat series: Apache load balancing to Tomcat and using DeltaManager
Published 2019-07-20 22:25:14 | Editor: 雪饮
The previous article covered reverse-proxying Tomcat with Apache's mod_jk and mod_proxy.
This time, building on that setup, we implement Apache load balancing to the backend Tomcat instances, session sharing across the Tomcat cluster, and Nginx load balancing to Tomcat.
Apache load balancing for Tomcat
Configuring backend #2
Copy the server.xml of backend #1 from the previous article's setup and change the key part as follows:
<Engine name="Catalina" defaultHost="www.xynes.cn" jvmRoute="TomcatB">
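For comparison, backend #1 from the previous article is assumed to keep the matching value; the jvmRoute names must line up with the route= values used in the balancer configuration further below, roughly:
<Engine name="Catalina" defaultHost="www.xynes.cn" jvmRoute="TomcatA">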
Then prepare the JSP test page:
[root@localhost src]# mkdir -p /web/webapps
[root@localhost src]# cat /web/webapps/cs.jsp
<%@ page language="java" %>
<html>
<head><title>TomcatB</title></head>
<body>
<h1><font color="red">TomcatB </font></h1>
<table align="center" border="1">
<tr>
<td>Session ID</td>
<% session.setAttribute("abc","abc"); %>
<td><%= session.getId() %></td>
</tr>
<tr>
<td>Created on</td>
<td><%= session.getCreationTime() %></td>
</tr>
</table>
</body>
</html>
Frontend configuration
In the frontend httpd.conf, re-enable (uncomment) the following include:
Include conf/extra/httpd-proxy.conf
and comment out the mod_jk include used last time:
Include conf/extra/httpd-jk.conf
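For the balancer directives below to work, the frontend httpd also needs the proxy-related modules loaded in httpd.conf. A sketch of the relevant lines, assuming an httpd 2.4 build with these compiled as shared modules:
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so
LoadModule slotmem_shm_module modules/mod_slotmem_shm.so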
Then set the contents of that configuration file to:
[root@localhost bin]# cat /usr/local/httpd/conf/extra/httpd-proxy.conf
ProxyRequests Off
<Proxy balancer://lbcluster1>
    BalancerMember ajp://192.168.128.135:8009 loadfactor=1 route=TomcatA
    BalancerMember ajp://192.168.128.136:8009 loadfactor=1 route=TomcatB
    ProxySet lbmethod=byrequests
</Proxy>
<VirtualHost *:80>
    ServerName localhost
    ProxyVia On
    ProxyPass / balancer://lbcluster1/
    ProxyPassReverse / balancer://lbcluster1/
    <Proxy *>
        Require all granted
    </Proxy>
    <Location />
        Require all granted
    </Location>
</VirtualHost>
This completes Apache load balancing across the two Tomcat backends. When testing, each forced refresh returns a page from a different backend Tomcat server.
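A quick way to check this from the shell (a sketch: 192.168.2.172 is the frontend Apache address mentioned later in this article, and cs.jsp is assumed to be reachable at the web root as set up in the previous article):
# Each request should print TomcatA or TomcatB in turn, because
# lbmethod=byrequests spreads requests evenly and no cookie is sent back.
for i in 1 2 3 4; do
  curl -s http://192.168.2.172/cs.jsp | grep -oE 'Tomcat[AB]' | head -1
done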
Cluster management with mod_proxy
Extend the httpd-proxy configuration file on the frontend httpd server with a balancer-manager section; the file now reads:
[root@localhost bin]# cat /usr/local/httpd/conf/extra/httpd-proxy.conf
ProxyRequests Off
<Proxy balancer://lbcluster1>
    BalancerMember ajp://192.168.128.135:8009 loadfactor=1 route=TomcatA
    BalancerMember ajp://192.168.128.136:8009 loadfactor=1 route=TomcatB
    ProxySet lbmethod=byrequests
</Proxy>
<VirtualHost *:80>
    ServerName localhost
    ProxyVia On
    ProxyPass / balancer://lbcluster1/
    ProxyPassReverse / balancer://lbcluster1/
    <Location /balancer-manager>
        SetHandler balancer-manager
        ProxyPass !
        Require all granted
    </Location>
    <Proxy *>
        Require all granted
    </Proxy>
    <Location />
        Require all granted
    </Location>
</VirtualHost>
Then visit the balancer-manager location on the frontend, e.g. http://192.168.2.172/balancer-manager (192.168.2.172 is the frontend Apache address used later in this article),
to reach the cluster management page.
Implementing cluster session sharing
Although load balancing now works, the session is not preserved: every refresh produces a new session ID.
Configuration on both backend Tomcat servers
[root@localhost bin]# mkdir /web/webapps/WEB-INF
[root@localhost bin]# cp /usr/local/tomcat/conf/web.xml /web/webapps/WEB-INF/
Then add the following inside the web-app element of that web.xml:
<distributable/>
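A minimal illustration of where the element sits (the copied conf/web.xml keeps all of its original content; only this single element is added as a direct child of web-app):
<web-app ...>
    ...
    <distributable/>
    ...
</web-app>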
Then add the following inside the Engine element of server.xml:
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="8">
    <Manager className="org.apache.catalina.ha.session.DeltaManager"
             expireSessionsOnShutdown="false"
             notifyListenersOnReplication="true"/>
    <Channel className="org.apache.catalina.tribes.group.GroupChannel">
        <Membership className="org.apache.catalina.tribes.membership.McastService"
                    address="228.50.10.1" bind="192.168.128.135" port="45564"
                    frequency="500" dropTime="3000"/>
        <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                  address="192.168.128.135" port="4000" autoBind="100"
                  selectorTimeout="5000" maxThreads="6"/>
        <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
            <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
        </Sender>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
    </Channel>
    <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
           filter=".*\.gif;.*\.js;.*\.jpg;.*\.htm;.*\.html;.*\.txt;"/>
    <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
    <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
              tempDir="/tmp/war-temp/" deployDir="/tmp/war-deploy/"
              watchDir="/tmp/war-listen/" watchEnabled="false"/>
    <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
    <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
Note that 228.50.10.1 is the multicast address of this cluster; it must be identical on both nodes.
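On backend #2 the same Cluster block is used, but the node-local addresses presumably change to that host's own IP; a sketch of the two adjusted elements:
<Membership className="org.apache.catalina.tribes.membership.McastService"
            address="228.50.10.1" bind="192.168.128.136" port="45564"
            frequency="500" dropTime="3000"/>
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
          address="192.168.128.136" port="4000" autoBind="100"
          selectorTimeout="5000" maxThreads="6"/>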
Testing
Restart both backend Tomcats and the frontend httpd, then test again with repeated forced refreshes: the load balancing behavior is unchanged, but the session is now preserved.
Note: if anything fails, check the catalina logs on each backend Tomcat under /usr/local/tomcat/logs, for example as shown below.
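A sketch, assuming Tomcat is started with startup.sh and therefore writes the default catalina.out:
# Follow the cluster/membership/replication messages on each backend while testing
tail -f /usr/local/tomcat/logs/catalina.out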
Tomcat load balancing and session sharing with mod_jk
The setup above achieves Tomcat load balancing and shared cluster sessions, but it is based on mod_proxy.
Below, the same is done with mod_jk; the backends stay exactly as built above and need no changes.
Frontend configuration
In httpd.conf, re-enable the httpd-jk.conf include and disable the httpd-proxy.conf include, as sketched below.
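The toggled lines in /usr/local/httpd/conf/httpd.conf would then look roughly like this (paths as used throughout this series):
Include conf/extra/httpd-jk.conf
#Include conf/extra/httpd-proxy.conf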
Then configure httpd-jk.conf as follows:
[root@localhost bin]# cat /usr/local/httpd/conf/extra/httpd-jk.conf
LoadModule jk_module modules/mod_jk.so
JkWorkersFile /usr/local/httpd/conf/extra/workers.properties
JkLogFile /usr/local/httpd/logs/mod_jk.log
JkLogLevel debug
JkMount /* lbcluster1
JkMount /jkstatus/ stat1
Then configure the workers properties file referenced by httpd-jk.conf:
[root@localhost bin]# cat /usr/local/httpd/conf/extra/workers.properties
worker.list = lbcluster1,stat1
worker.TomcatA.port = 8009
worker.TomcatA.host = 192.168.128.135
worker.TomcatA.type = ajp13
worker.TomcatA.lbfactor = 1
worker.TomcatB.port = 8009
worker.TomcatB.host = 192.168.128.136
worker.TomcatB.type = ajp13
worker.TomcatB.lbfactor = 1
worker.lbcluster1.type = lb
worker.lbcluster1.balance_workers = TomcatA, TomcatB
worker.lbcluster1.sticky_session = 0
worker.lbcluster1.method = R
worker.stat1.type = status
Restart the httpd service and test again; as long as the load balancing still works and the session is preserved, everything is fine.
Cluster management
With mod_jk providing load balancing and session sharing for Tomcat behind Apache, the cluster management (status) address is:
http://192.168.2.172/jkstatus/
Here 192.168.2.172 is the IP address of the frontend Apache.
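A quick check from the shell that the status worker answers (a sketch):
curl -sI http://192.168.2.172/jkstatus/    # expect an HTTP 200 response from the stat1 status worker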
Load balancing and session sharing for Tomcat with Nginx
The Tomcat backends keep the configuration built above; only nginx.conf on the frontend needs to be configured.
The relevant part of the configuration:
upstream tomcatsrvs {
    server 192.168.128.135:8080;
    server 192.168.128.136:8080;
}

server {
    listen       80;
    server_name  localhost;

    #charset koi8-r;
    #access_log  logs/host.access.log  main;

    location / {
        proxy_pass http://tomcatsrvs;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}
Then restart the nginx service and test; as long as the load balancing still works and the session is preserved, everything is fine.
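To see the session sharing from the shell (a sketch; replace <nginx-ip> with your frontend nginx address):
# With a cookie jar the JSESSIONID from the first reply is sent back on the
# second request; thanks to DeltaManager replication it stays valid even
# when nginx round-robins the request to the other backend.
curl -s -c /tmp/cj -b /tmp/cj http://<nginx-ip>/cs.jsp | grep -oE 'Tomcat[AB]' | head -1
curl -s -c /tmp/cj -b /tmp/cj http://<nginx-ip>/cs.jsp | grep -oE 'Tomcat[AB]' | head -1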
Summary of node status in cluster management
Whether Apache balances the backend Tomcats via mod_proxy or via mod_jk (with session sharing enabled), the behavior on node failure is the same:
when a backend node goes down, a refresh of the cluster management page shows the node as failed almost immediately, and the frontend stops sending requests to it.
When that node comes back up, however, it takes quite a while before the frontend starts routing requests to it again, and the management page likewise takes a long time to show it as online again.
Keywords: tomcat, apache, nginx, session, load balancing, status, management