Configuring WildFly with Apache, load balancing and clustering on CentOS 7

This article presents a proposal for a load-balanced, clustered, full-HA-profile environment using Apache2 and WildFly, running in domain mode for management purposes.

For load balancing and clustering we will use mod_cluster on Apache, with all machines running CentOS 7. For this purpose we will use four servers, distributed as follows:

  • server1: one server for Apache and mod_cluster;
  • server2: one server for the WildFly domain controller;
  • server3 and server4: two servers running WildFly to form the cluster.

We will follow a sequence of commands to configure the servers and the application servers.

Check on all servers…

Is the network up? Check each machine's IP with "ip addr" or "ifconfig" and try to ping it or connect over ssh. By default, CentOS 7 leaves the network interface disabled.
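
If the interface is down, here is a minimal sketch to bring it up and enable it at boot, assuming the connection is named eth0 (check the real name with "nmcli con show"):

sudo nmcli connection modify eth0 connection.autoconnect yes
sudo nmcli connection up eth0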

server1: Installing the Apache2 HTTP Server

Run the command below to install Apache2:

sudo yum install httpd httpd-devel apr-devel openssl-devel mod_ssl -y

After the installation, start the service.

sudo service httpd start

In a browser on your machine, check whether the Apache HTTP Server is up by accessing its IP: http://<IP_SERVER1>. If no test page appears, you probably need to open port 80 for the HTTP Server in the firewall; a firewalld sketch follows below.
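
A minimal sketch using firewalld, CentOS 7's default firewall, to open port 80 (assumes firewalld is running):

sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --reload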

Download and install mod_cluster.

wget http://downloads.jboss.org/mod_cluster//1.3.1.Final/linux-x86_64/mod_cluster-1.3.1.Final-linux2-x64-so.tar.gz

Then install it:

tar -zxvf mod_cluster-1.3.1.Final-linux2-x64-so.tar.gz
sudo cp mod_advertise.so /etc/httpd/modules/
sudo cp mod_manager.so /etc/httpd/modules/
sudo cp mod_proxy_cluster.so /etc/httpd/modules/
sudo cp mod_cluster_slotmem.so /etc/httpd/modules/

Comment out the mod_proxy_balancer line, since mod_cluster will be used instead:

cd /etc/httpd/conf.modules.d
vi 00-proxy.conf

After it has been commented out:

#LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
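
If you prefer to do this non-interactively, a one-liner sketch that comments the module out in place:

sudo sed -i 's|^LoadModule proxy_balancer_module|#&|' /etc/httpd/conf.modules.d/00-proxy.conf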

Create the cluster configuration file, mod_cluster.conf, and add its contents.

cd /etc/httpd/conf.d/
touch mod_cluster.conf
vi mod_cluster.conf

Add the text below to mod_cluster.conf:

LoadModule cluster_slotmem_module modules/mod_cluster_slotmem.so
LoadModule manager_module modules/mod_manager.so
LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
LoadModule advertise_module modules/mod_advertise.so

MemManagerFile /var/cache/mod_cluster

Maxcontext 100
Maxnode 100
Maxhost 100

<VirtualHost *:80>

	<Directory />
		Order deny,allow
		Allow from all
	</Directory>

	<Location /mod_cluster_manager>
		SetHandler mod_cluster-manager
		#Order deny,allow
		#Deny from all
		#Allow from all
		AuthType Basic
		AuthName "MCM"
		AuthUserFile /etc/httpd/modclusterpassword
		Require user admin
	</Location>

	KeepAliveTimeout 60
	MaxKeepAliveRequests 0
	ServerAdvertise Off
	EnableMCPMReceive

</VirtualHost>

Create a user and password for mod_cluster, with the simple name "admin":

sudo htpasswd -c /etc/httpd/modclusterpassword admin

Restart Apache.

sudo service httpd stop
sudo service httpd start

Test again in the browser: http://<IP_SERVER1>

Test whether mod_cluster was correctly installed and is responding: http://<IP_SERVER1>/mod_cluster_manager
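
You can also check it from a shell with curl, authenticating as the "admin" user created above (curl prompts for the password):

curl -u admin http://<IP_SERVER1>/mod_cluster_manager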

server2, server3 and server4: Installing and configuring Java and WildFly

Download, install and configure Java:

wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/7u80-b15/jdk-7u80-linux-x64.rpm
sudo rpm -Uvh jdk-7u80-linux-x64.rpm
sudo alternatives --install /usr/bin/java java /usr/java/latest/jre/bin/java 200000
sudo alternatives --install /usr/bin/javaws javaws /usr/java/latest/jre/bin/javaws 200000
sudo alternatives --install /usr/bin/javac javac /usr/java/latest/bin/javac 200000
sudo alternatives --install /usr/bin/jar jar /usr/java/latest/bin/jar 200000
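
To confirm that the alternatives now point to the Oracle JDK, a quick check:

java -version
alternatives --display java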

Download WildFly:

wget http://download.jboss.org/wildfly/8.1.0.Final/wildfly-8.1.0.Final.tar.gz

Install WildFly:

tar xzf wildfly-8.1.0.Final.tar.gz
sudo mv wildfly-8.1.0.Final /opt
cd /opt
sudo ln -sf wildfly-8.1.0.Final/ wildfly

Create a Linux user named wildfly:

sudo groupadd wildfly
sudo useradd -s /bin/bash -d /home/wildfly -m -g wildfly wildfly
sudo chown -R wildfly:wildfly /opt/wildfly-8.1.0.Final
sudo chown -h wildfly:wildfly /opt/wildfly

Give the "wildfly" user administrator rights on Linux (careful!) by adding the line below:

sudo visudo
wildfly ALL=(ALL) NOPASSWD:ALL

Then set a password and switch to the "wildfly" user:

sudo passwd wildfly
su wildfly

Configure WildFly:

sudo cp /opt/wildfly/bin/init.d/wildfly.conf /etc/default/

Edit the configuration file and uncomment the lines below:

sudo vim /etc/default/wildfly.conf
## Location of WildFly
JBOSS_HOME="/opt/wildfly"

## The username who should own the process.
JBOSS_USER=wildfly

Configure WildFly as a service:

sudo cp /opt/wildfly/bin/init.d/wildfly-init-redhat.sh /etc/init.d/wildfly
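
Optionally, to have WildFly start at boot, a sketch using the SysV compatibility tools that match the init script above:

sudo chkconfig --add wildfly
sudo chkconfig wildfly on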

Start WildFly:

sudo service wildfly start

Check the log to see whether there are any errors:

more /var/log/wildfly/console.log

Stop WildFly:

sudo service wildfly stop

server2: configuring the WildFly master domain

Configure the WildFly master:

sudo vi /etc/default/wildfly.conf

Change the lines below:

JBOSS_MODE=domain
JBOSS_DOMAIN_CONFIG=domain.xml
JBOSS_HOST_CONFIG=host-master.xml
STARTUP_WAIT=30
SHUTDOWN_WAIT=30
JBOSS_CONSOLE_LOG=/var/log/wildfly/console.log

Log in as the "wildfly" user if you are not already:

su wildfly

Configure the jboss.bind.address.management parameter, adding it alongside the other JAVA_OPTS lines:

vi /opt/wildfly/bin/domain.conf
JAVA_OPTS="$JAVA_OPTS -Djboss.bind.address.management=<IP_DA_MAQUINA>"

Configure the domain's server groups:

vi /opt/wildfly/domain/configuration/domain.xml

From…

<server-groups>
 [...]
</server-groups>

To…

<server-groups>
   <server-group name="arquitetura-grupo-1" profile="full-ha">
    <jvm name="default">
      <heap size="512m" max-size="512m"/>
      <permgen max-size="256m"/>
    </jvm>
    <socket-binding-group ref="full-ha-sockets"/>
   </server-group>
</server-groups>

Create a user inside WildFly for communication in domain mode (we will use it later). Run add-user.sh and answer the prompts in the sequence below: management user, username "wuser", password, no groups, confirm, and answer "yes" when asked whether the user will be used for one AS process to connect to another.

sh /opt/wildfly/bin/add-user.sh
[enter]
wuser
sapucaia@1
sapucaia@1
[enter]
yes
yes

Write down the secret generated when the user is created (add-user.sh prints it after you answer "yes" to the last question), as it will be used further on. It has the same form as the placeholder shipped in host-slave.xml:

more /opt/wildfly/domain/configuration/host-slave.xml | grep secret
<secret value="c2xhdmVfdXNlcl9wYXNzd29yZA=="/>
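
The secret is simply the Base64 encoding of the management user's password, and its output is the value that goes into <secret value="..."/> on the slaves. A sketch assuming the password used above:

echo -n 'sapucaia@1' | base64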

Create a user to access the web console (same sequence, but answer "no" to the last question):

sh /opt/wildfly/bin/add-user.sh
[enter]
domainadmin
sapucaia@1
sapucaia@1
[enter]
yes
no

server3 and server4: configuring WildFly as host-slave

Configure WildFly for host-slave mode:

sudo vi /etc/default/wildfly.conf

Change the lines below:

JBOSS_USER=wildfly
JBOSS_MODE=domain
JBOSS_HOST_CONFIG=host-slave.xml
STARTUP_WAIT=30
SHUTDOWN_WAIT=30
JBOSS_CONSOLE_LOG=/var/log/wildfly/console.log

Add the JAVA_OPTS lines for the domain:

vi /opt/wildfly/bin/domain.conf
JAVA_OPTS="$JAVA_OPTS -Djboss.domain.master.address=<IP MASTER>" 
JAVA_OPTS="$JAVA_OPTS -Djboss.bind.address=<IP MAQUINA>"

Configure the slave:

vi /opt/wildfly/domain/configuration/host-slave.xml

Add a name attribute to the <host> element (use host1-wildfly on server3 and host2-wildfly on server4):

<host name="host1-wildfly" xmlns="urn:jboss:domain:2.1">

Change the secret to the one generated on the master domain:

<secret value="c2xhdmVfdXNlcl9wYXNzd29yZA=="/>

Add the username:

<domain-controller>
    <remote host="${jboss.domain.master.address}" username="wuser" port="${jboss.domain.master.port:9999}" security-realm="ManagementRealm"/>
</domain-controller>

Change the servers section as follows (on server3 use arquitetura-1, on server4 use arquitetura-2):

<servers>
   <server name="arquitetura-1" group="arquitetura-grupo-1"/>
</servers>
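
After saving host-slave.xml, start WildFly on each slave again and watch the log; each host should register with the domain controller on server2:

sudo service wildfly start
tail -f /var/log/wildfly/console.log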

Connecting WildFly to the Apache Web Server

On server2 (the WildFly domain controller), edit the file /opt/wildfly/domain/configuration/domain.xml. Look for the profile <profile name="full-ha">. Inside this profile, edit the subsystem <subsystem xmlns="urn:jboss:domain:modcluster:1.2"> so that it looks like the snippet below:

<subsystem xmlns="urn:jboss:domain:modcluster:1.2">
  <mod-cluster-config advertise-socket="modcluster" proxy-list="<IP_MOD_CLUSTER>:80" advertise="false" sticky-session="true" load-balancing-group="arquitetura"  connector="ajp">
   <dynamic-load-provider>
     <load-metric type="cpu"/>
   </dynamic-load-provider>
  </mod-cluster-config>
</subsystem>

Note that in the proxy-list attribute we list the balancer(s) / Apache web servers. Access the mod_cluster manager to see the instances connected to the balancers.
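
To see contexts appear in the mod_cluster manager, deploy an application to the server group. The sketch below assumes a hypothetical cluster-test.war (for session replication it must contain <distributable/> in its web.xml), uses <IP_SERVER2> as the master's management address, and authenticates with the domainadmin user when prompted:

/opt/wildfly/bin/jboss-cli.sh --connect --controller=<IP_SERVER2>:9990 \
  --command="deploy /tmp/cluster-test.war --server-groups=arquitetura-grupo-1"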

Multiple Instances of WildFly on Different Ports on the Same Machine

WildFly can be started on the default port 8080 using:

./bin/standalone.sh

The default landing page is then accessible at localhost:8080 and shows the WildFly welcome screen.

The default admin console is accessible at localhost:9990/console.

Do you want to start another WildFly standalone instance on the same machine, on a different port?

./bin/standalone.sh -Djboss.socket.binding.port-offset=1000

will start another standalone server with every port shifted by 1000, so the landing page is now accessible at localhost:9080 (8080 + 1000) and the admin console at localhost:10990/console.

Similarly, you can start as many instances as you need by giving each one its own port offset.
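
For example, a hypothetical sketch starting two extra instances from the same installation, each with its own offset and node name (for anything beyond a quick test, give each instance its own base directory as well):

./bin/standalone.sh -Djboss.socket.binding.port-offset=1000 -Djboss.node.name=node2 &
./bin/standalone.sh -Djboss.socket.binding.port-offset=2000 -Djboss.node.name=node3 &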

Changing the Default HSQLDB to Use a Database for JMS in JBoss 4.2.3

By default, JBoss uses HSQLDB for JMS persistence. To persist JMS messages to a user database such as MySQL, Oracle, etc., the following changes have to be made in JBoss 4.2.3. We assume a PostgreSQL database for this purpose.

1. Delete hsqldb-ds.xml from the JBOSS_HOME/server/[instance]/deploy folder.

2. Copy the respective database-related *-ds.xml file from JBOSS_HOME/docs/examples/jca/ to the deploy folder of your [instance].

3. Change the jndi-name in the *-ds.xml file to "DefaultDS" (a sketch follows at the end of this section).

4. Delete the hsqldb-jdbc2-service.xml file from the JBOSS_HOME/server/[instance]/deploy/jms folder.

5. Copy the respective database persistence manager service XML file, *-jdbc2-service.xml, from JBOSS_HOME/docs/examples/jms to the JBOSS_HOME/server/[instance]/deploy/jms folder.

6. Change the JNDI name in *-jdbc2-service.xml to "DefaultDS", i.e. jboss.jca:service=DataSourceBinding,name=DefaultDS.

7. Rename hsqldb-jdbc-state-service.xml to the respective database name, *-jdbc-state-service.xml; this is optional and you can keep the file as it is.

8. Copy the respective database connector JAR file to the JBOSS_HOME/server/[instance]/lib folder.

Now the configuration is modified so that JMS persistence uses the user database. Data is persisted to the jms_message table only when a large number of JMS messages is generated; it is temporary storage, and once a JMS message is consumed it is automatically deleted from the jms_message table.
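
As a sketch for steps 2 and 3, this is roughly what a PostgreSQL DefaultDS file could look like; the instance name "default", host, database and credentials are placeholders, and the real template is docs/examples/jca/postgres-ds.xml:

cat > $JBOSS_HOME/server/default/deploy/postgres-ds.xml <<'EOF'
<datasources>
  <local-tx-datasource>
    <jndi-name>DefaultDS</jndi-name>
    <connection-url>jdbc:postgresql://dbhost:5432/jbossdb</connection-url>
    <driver-class>org.postgresql.Driver</driver-class>
    <user-name>jboss</user-name>
    <password>secret</password>
  </local-tx-datasource>
</datasources>
EOF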

JBoss 6.x Tuning/Slimming

Introduction

The following slimming recommendations are for a standard JBoss AS 6.0.0.Final (Community) "all" configuration.

Slimming is very application-specific, so this is by no means a universal document. If you have documented the process for slimming other services in JBoss 6.x, please add it here.

The slimming document for JBoss 5.x, http://community.jboss.org/wiki/JBoss5xTuningSlimming, is not completely out of date for JBoss 6.x, so you may still want to look into it.

Remove HornetQ JMS (Java Message Service)

In JBOSS_HOME/server/<node>/deploy/ remove:

  • hornetq
  • jms-ra.rar

In JBOSS_HOME/server/<node>/deployers/ remove:

  • hornetq

In JBOSS_HOME/common/lib remove (only if no other server configuration uses HornetQ):

  • hornetq*

In JBOSS_HOME/server/<node>/conf/ remove the unused configuration:

  • delete the <application-policy name="hornetq"> element from login-config.xml
  • remove props/hornetq-roles.properties
  • remove props/hornetq-users.properties
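
Put together as a shell sketch (JBOSS_HOME and the configuration name "default" are assumptions; adjust to your node):

NODE=default
# HornetQ deployments and deployers
rm -rf "$JBOSS_HOME/server/$NODE/deploy/hornetq" "$JBOSS_HOME/server/$NODE/deploy/jms-ra.rar"
rm -rf "$JBOSS_HOME/server/$NODE/deployers/hornetq"*
# HornetQ properties files; also delete <application-policy name="hornetq"> from conf/login-config.xml by hand
rm -f "$JBOSS_HOME/server/$NODE/conf/props/hornetq-roles.properties" "$JBOSS_HOME/server/$NODE/conf/props/hornetq-users.properties"
# Only if no other server configuration uses HornetQ:
# rm -f "$JBOSS_HOME/common/lib/hornetq"*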

Turn off hot deployment

In JBOSS_HOME/server/<node>/deploy/ remove:

  • hdscanner-jboss-beans.xml

Remove Hypersonic DB

In JBOSS_HOME/server/<node>/deploy/ remove:

  • hsqldb-ds.xml

In JBOSS_HOME/common/lib remove (only if no server configuration uses Hypersonic):

  • hsqldb.jar hsqldb-plugin.jar

The following services use the “DefaultDS” datasource for persistence:

  • JUDDI
  • UUID key generator
  • EJB2 timer

One option is to remove or adapt these services; the other is to provide a "DefaultDS" datasource for another RDBMS.

Datasource examples are located in docs/examples/jca.

EJB2 Timer service

To deactivate persistence for the EJB2 timer, replace:

 <mbean code="org.jboss.ejb.txtimer.DatabasePersistencePolicy" name="jboss.ejb:service=EJBTimerService,persistencePolicy=database">

    <!-- DataSourceBinding ObjectName -->
    <depends optional-attribute-name="DataSource">jboss.jca:service=DataSourceBinding,name=DefaultDS</depends>

    <!-- The plugin that handles database persistence -->
    <attribute name="DatabasePersistencePlugin">org.jboss.ejb.txtimer.GeneralPurposeDatabasePersistencePlugin</attribute>

    <!-- The timers table name -->
    <attribute name="TimersTable">TIMERS</attribute>
    <depends>jboss.jdbc:datasource=DefaultDS,service=metadata</depends>
</mbean>
...

<mbean code="org.jboss.ejb.txtimer.EJBTimerServiceImpl" ...
   <depends optional-attribute-name="PersistencePolicy">jboss.ejb:service=EJBTimerService,persistencePolicy=database</depends>

with:

<mbean code="org.jboss.ejb.txtimer.NoopPersistencePolicy" name="jboss.ejb:service=EJBTimerService,persistencePolicy=noop"/>

...

  <mbean code="org.jboss.ejb.txtimer.EJBTimerServiceImpl" ...
   <depends optional-attribute-name="PersistencePolicy">jboss.ejb:service=EJBTimerService,persistencePolicy=noop</depends>

Remove JUDDI

In JBOSS_HOME/server/<node>/deploy/ remove:

  • juddi-service.sar

Remove Key Generator

In JBOSS_HOME/server/<node>/deploy/ remove:

  • uuid-key-generator.sar

Remove Administration console

In JBOSS_HOME/common/deploy/ remove:

  • admin-console.war

In JBOSS_HOME/server/<node>/deploy/ remove:

  • admin-console-activator-jboss-beans.xml

Remove JMX console

https://community.jboss.org/message/734664#734664

In JBOSS_HOME/common/deploy/ remove:

  • jmx-console.war

In JBOSS_HOME/server/<node>/deploy/ remove:

  • jmx-console-activator-jboss-beans.xml

Remove JBoss Web Services console

In JBOSS_HOME/common/deploy/ remove:

  • jbossws-console.war

In JBOSS_HOME/server/<node>/deploy/ remove:

  • jbossws-console.war
  • jbossws-console-activator-jboss-beans.xml

Installing JBossAS Tools for Eclipse Juno

To install JBossAS Tools, which provides the tooling to use the JBoss 6 server in Eclipse Juno, go to "Help -> Install New Software" and paste the following URL into "Work with:": http://download.jboss.org/jbosstools/updates/development/juno/. Wait a moment, then choose the items to install as described below:

Expand the "Abridged JBoss Tools 3.3" item and install "Hibernate Tools", "JBoss Archive Tools", "JBossAS Tools" and "JMX Console"…

Tip: Restarting Eclipse from within the IDE is not enough; you have to quit it and open it again.