Partitioning tables to improve query performance on Zabbix 2.0.5 with PostgreSQL 9.0.1

I recently upgraded my NMS from Zabbix 1.8.2 (Debian Squeeze) to 2.0.5. One of the main reasons for moving to this version was being able to partition tables, since performance in my scenario (128 monitored hosts, one-year retention and NVPS = 110) was starting to become painful at times.

The purpose of table partitioning is to split Zabbix's history tables into predefined intervals, such as daily, weekly or monthly. The interval should be chosen based on how much data is inserted into the database over a given period.

After partitioning, a new schema holds the tables containing the data split by the selected interval, which makes looking up information in the database considerably easier.

All steps are performed in PostgreSQL, making the solution completely transparent to Zabbix.

Create the new schema:


CREATE SCHEMA partitions
AUTHORIZATION zabbix;

Create the function that creates the partitions:


-- Function: trg_partition()

-- DROP FUNCTION trg_partition();

CREATE OR REPLACE FUNCTION trg_partition()
RETURNS TRIGGER AS
$BODY$
DECLARE
prefix text := 'partitions.';
timeformat text;
selector text;
_interval INTERVAL;
tablename text;
startdate text;
enddate text;
create_table_part text;
create_index_part text;
BEGIN

selector = TG_ARGV[0];

IF selector = 'day' THEN
timeformat := 'YYYY_MM_DD';
ELSIF selector = 'month' THEN
timeformat := 'YYYY_MM';
END IF;

_interval := '1 ' || selector;
tablename :=  TG_TABLE_NAME || '_p' || TO_CHAR(TO_TIMESTAMP(NEW.clock), timeformat);

EXECUTE 'INSERT INTO ' || prefix || quote_ident(tablename) || ' SELECT ($1).*' USING NEW;
RETURN NULL;

EXCEPTION
WHEN undefined_table THEN

startdate := EXTRACT(epoch FROM date_trunc(selector, TO_TIMESTAMP(NEW.clock)));
enddate := EXTRACT(epoch FROM date_trunc(selector, TO_TIMESTAMP(NEW.clock) + _interval ));

create_table_part:= 'CREATE TABLE IF NOT EXISTS '|| prefix || quote_ident(tablename) || ' (CHECK ((clock >= ' || quote_literal(startdate) || ' AND clock < ' || quote_literal(enddate) || '))) INHERITS ('|| TG_TABLE_NAME || ')';
create_index_part:= 'CREATE INDEX '|| quote_ident(tablename) || '_1 on ' || prefix || quote_ident(tablename) || '(itemid,clock)';

EXECUTE create_table_part;
EXECUTE create_index_part;

--insert it again
EXECUTE 'INSERT INTO ' || prefix || quote_ident(tablename) || ' SELECT ($1).*' USING NEW;
RETURN NULL;

END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION trg_partition()
OWNER TO postgres;

Create a trigger for each table you want to partition, with the desired partitioning interval (daily or monthly):


CREATE TRIGGER partition_trg BEFORE INSERT ON history           FOR EACH ROW EXECUTE PROCEDURE trg_partition('day');
CREATE TRIGGER partition_trg BEFORE INSERT ON history_sync      FOR EACH ROW EXECUTE PROCEDURE trg_partition('day');
CREATE TRIGGER partition_trg BEFORE INSERT ON history_uint      FOR EACH ROW EXECUTE PROCEDURE trg_partition('day');
CREATE TRIGGER partition_trg BEFORE INSERT ON history_str_sync  FOR EACH ROW EXECUTE PROCEDURE trg_partition('day');
CREATE TRIGGER partition_trg BEFORE INSERT ON history_log       FOR EACH ROW EXECUTE PROCEDURE trg_partition('day');
CREATE TRIGGER partition_trg BEFORE INSERT ON trends            FOR EACH ROW EXECUTE PROCEDURE trg_partition('month');
CREATE TRIGGER partition_trg BEFORE INSERT ON trends_uint       FOR EACH ROW EXECUTE PROCEDURE trg_partition('month');
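
Once the triggers are in place and new data starts arriving, the child tables should show up under the partitions schema. A quick check from the shell (just a sketch, assuming the database is called zabbix and the zabbix role may query the catalog):

psql -U zabbix -d zabbix -c "SELECT tablename FROM pg_tables WHERE schemaname = 'partitions' ORDER BY tablename;"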

Maintaining retention and removing old partitions

The following function removes old partitions and can be scheduled to run via cron.

-- Function: delete_partitions(interval, text)

-- DROP FUNCTION delete_partitions(interval, text);

CREATE OR REPLACE FUNCTION delete_partitions(intervaltodelete INTERVAL, tabletype text)
RETURNS text AS
$BODY$
DECLARE
result RECORD ;
prefix text := 'partitions.';
table_timestamp TIMESTAMP;
delete_before_date DATE;
tablename text;

BEGIN
FOR result IN SELECT * FROM pg_tables WHERE schemaname = 'partitions' LOOP

table_timestamp := TO_TIMESTAMP(substring(result.tablename FROM '[0-9_]*$'), 'YYYY_MM_DD');
delete_before_date := date_trunc('day', NOW() - intervalToDelete);
tablename := result.tablename;

-- Was it called properly?
IF tabletype != 'month' AND tabletype != 'day' THEN
RAISE EXCEPTION 'Please specify "month" or "day" instead of %', tabletype;
END IF;

--Check whether the table name has a day (YYYY_MM_DD) or month (YYYY_MM) format
IF LENGTH(substring(result.tablename FROM '[0-9_]*$')) = 10 AND tabletype = 'month' THEN
--This is a daily partition YYYY_MM_DD
-- RAISE NOTICE 'Skipping table % when trying to delete "%" partitions (%)', result.tablename, tabletype, length(substring(result.tablename from '[0-9_]*$'));
CONTINUE;
ELSIF LENGTH(substring(result.tablename FROM '[0-9_]*$')) = 7 AND tabletype = 'day' THEN
--this is a monthly partition
--RAISE NOTICE 'Skipping table % when trying to delete "%" partitions (%)', result.tablename, tabletype, length(substring(result.tablename from '[0-9_]*$'));
CONTINUE;
ELSE
--This is the correct table type. Go ahead and check if it needs to be deleted
--RAISE NOTICE 'Checking table %', result.tablename;
END IF;

IF table_timestamp <= delete_before_date THEN
RAISE NOTICE 'Deleting table %', quote_ident(tablename);
EXECUTE 'DROP TABLE ' || prefix || quote_ident(tablename) || ';';
END IF;
END LOOP;
RETURN 'OK';

END;

$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION delete_partitions(INTERVAL, text)
OWNER TO postgres;

Examples of calling the function:


SELECT delete_partitions('7 days', 'day')
SELECT delete_partitions('11 months', 'month')
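
To automate the retention cleanup mentioned above, these calls can go into cron. A sketch of a daily crontab entry (assuming the database is called zabbix and that psql can authenticate non-interactively, e.g. via a .pgpass file):

# every day at 03:00, keep 7 days of history partitions and 11 months of trend partitions
0 3 * * * psql -U zabbix -d zabbix -c "SELECT delete_partitions('7 days', 'day'); SELECT delete_partitions('11 months', 'month');"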

Authenticating SVN users against an OpenLDAP directory

A short guide to authenticating OpenLDAP users in Subversion using SASL.

Done on Debian GNU/Linux 6 amd64.

Prerequisites: Subversion already installed and a repository already created. If you need a guide for installing SVN and creating the repositories, I recommend this tutorial: http://longspine.com/how-to/install-apachesubversion-on-debian-lenny-and-migrate-the-repositories/

Required packages:


apt-get install db4.7-util sasl2-bin ldap-utils

The following steps assume a repository previously created at /home/svn/myproject.

Edit the repository's svnserve.conf file (/home/svn/myproject/conf/svnserve.conf) as shown below:


[general]
anon-access = none
auth-access = write

realm = myproject

[sasl]
use-sasl = true

Pay attention to the realm parameter, which must be the repository name.

Create the following SVN definition file for saslauthd at /usr/lib/sasl2/svn.conf:


#/usr/lib/sasl2/svn.conf -- might be /usr/lib/sasl2/subversion.conf not sure, make both

## Password check method, default to the SASL AUTH daemon

pwcheck_method: saslauthd

## Auxiliary (property) plugin, use ldap

auxprop_plugin: ldap

## Mechanism list, MS AD requires you to send credentials in plain text

mech_list: PLAIN LOGIN

## Not sure if this is required... but I kept it in

ldapdb_mech: PLAIN LOGIN

Now for the saslauthd service configuration: edit /etc/default/saslauthd and change the following parameters:


START=yes
MECHANISMS="ldap"

The magic happens in the next file, which defines how SASL searches the directory for users. First, here is a sample DIT and how the search will be performed.

dc=exemplo,dc=com
|-cn=admin
|
|-ou=people
|  |-uid=user1
|  |-uid=user2
|
|-ou=group
|  |-cn=myproject
|  |-cn=someAnotherProject

The users belong to the posixAccount class and the groups to the posixGroup class. As you can see, we create one group per SVN repository and assign the appropriate users to it through the memberUid attribute, as shown below:

dn: cn=myproject,ou=group,dc=exemplo,dc=com
gidNumber: 2031
cn: myproject
objectClass: top
objectClass: posixGroup
memberUid: user1
memberUid: user2
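
As a quick sanity check (a sketch, reusing the admin bind DN and base DN from this example), you can confirm that the group and its memberUid values are visible to a simple bind:

ldapsearch -x -D "cn=admin,dc=exemplo,dc=com" -W -b "ou=group,dc=exemplo,dc=com" "(cn=myproject)" memberUid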

Now edit SASL's main configuration file, /etc/saslauthd.conf, as shown below:


## URL of the OpenLDAP server
ldap_servers: ldap://"openldap server ip address":389

## Not sure why exactly, but yes doesn't work... so no.
ldap_use_sasl: no

## Bind DN (Distinguished Name) of the user used to bind to the directory
ldap_bind_dn: cn=admin,dc=exemplo,dc=com

## Password to the above user
ldap_password: openldap_password_goes_here

## Sends passwords as plain text to the LDAP server to authenticate
ldap_mech: PLAIN

## Auth Method = Bind as the specified user, then search for users in the directory
ldap_auth_method: bind

## Filter for users: %U is replaced with the login name and matched against the uid attribute
ldap_filter: uid=%U
ldap_scope: sub
ldap_password_attr: userPassword
ldap_search_base: ou=people,dc=exemplo,dc=com

## Group Filter
ldap_group_match_method: filter
ldap_group_search_base: ou=group,dc=exemplo,dc=com
ldap_group_filter: (&(objectClass=posixGroup)(cn=%r)(memberUid=%u))

SASL will run the query defined in the ldap_group_filter directive, replacing the %r variable with the realm name, i.e. the repository name defined in svnserve.conf, and the %u variable with the username supplied at login.

So, to authenticate, a user must provide a correct login and password and belong to the repository's LDAP group.
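
A quick way to exercise the whole chain before touching any SVN client is the testsaslauthd utility shipped with sasl2-bin; a sketch, assuming a user called user1 and the myproject realm (the password placeholder is whatever is stored in the directory):

testsaslauthd -u user1 -p 'user1_password' -s svn -r myproject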

After that, start saslauthd and restart Subversion:


service saslauthd start
service svn restart

You can follow the queries by tailing /var/log/auth.log. Another tip is to run saslauthd in debug mode with the command:


saslauthd -a ldap -d

References:
http://notesfromchechu.com/blog/subversionopenldap/

Implementing a highly available HTTP load-balancing system using Nginx as a reverse proxy, rsync synchronization, PHP session sharing and SSL

1. Introduction

This article demonstrates how to configure a high-availability environment for web services. It could be split into several independent articles covering each step and the installation of each piece of software, but I have gathered all the information into a single article that covers a complete environment with high availability, load balancing and caching (reverse proxy) for a multi-platform web stack, such as Apache + PHP or Apache + Tomcat.

2. Software Used

We will use keepalived to provide IP address failover and health checking of the web servers. If one of the nodes goes out of operation, keepalived's health checks take care of configuring the appropriate IP addresses and routes on the nodes that take over the service previously handled by the failed server.
Nginx (read "engine X") is a fast, lightweight web server and reverse proxy. Its optimized internal architecture lets it serve hundreds of connections with very little CPU and memory overhead. In this setup we will use Nginx as a reverse proxy: page requests for our websites reach Nginx first, which in turn load-balances them across the Apache web servers that serve the requested pages. In our scenario we will also use Nginx to serve the sites' static content (images, PDF files, CSS, JS and so on), since it is much faster than Apache at this job. We will also enable caching of requested pages, so Nginx can answer a page that is already in the cache without opening any connection to the Apache or Tomcat servers, leaving them free to process what really matters: dynamic content such as PHP, JSP, etc.

By default, sessions are stored on the same machines that serve PHP, which would force us to guarantee that our users are always directed to the same web server. But our load balancing divides requests evenly across the web servers, so a session started on one server would not be recognized by the other members of the cluster. To solve this we will use repcached, a patch for memcached, which in turn is a system for caching almost any kind of object (function results, database query results, etc.) in RAM. Caching in memory instead of loading data from a database can significantly improve the performance of PHP applications.

repcached adds replication of memcached data between the cluster nodes, so a session stored in one node's repcached is available on any other node of the cluster, and if the original node becomes unavailable it can recover its stored data from the other cluster members when it comes back online.

This way sessions are shared among all the servers in our cluster, since we have no way of knowing which web server the user will be balanced to on each request.

Since all the nodes of our high-availability web cluster will serve the same content, i.e. the same pages, we must make sure they stay synchronized with each other. The easiest solution would be a storage appliance with a distributed file system such as OCFS2 or GFS attached to the servers, but since such equipment is not always available, we turn to other solutions.

We could use an NFS share mounted on all the servers, but we would still have a bottleneck: the high I/O rate on the network.

We chose a simple mirroring solution using rsync on the directories published by the web servers. There are more advanced options for mirroring over the network, such as DRBD, and distributed file systems such as GlusterFS, which would deserve an article of their own, but for our environment rsync is enough.

3. Scenario

We will use only two physical servers running Debian GNU/Linux 6 (squeeze) amd64 with a basic installation from the netinst image. Even though only two servers are used, this solution is quite scalable: the number of cluster nodes can easily be increased as needed.

Requests to the hosted websites are first received by Nginx on Server A, the master, which balances the requests across the Apache instances on Servers A and B (the back-end servers), as shown in the image below:

  • Server A: 10.10.10.1
  • Server B: 10.10.10.2
  • Virtual IP (VIP): 10.10.10.10
[Figure: scenario diagram]

To avoid a Single Point of Failure (SPOF), we will set up failover between the Nginx processes: if Nginx on Server A fails, Server B takes over its role, preventing the whole system from becoming unavailable even while the web servers themselves are up and running. We therefore have two levels of high availability: between the load balancers (Nginx) and between Web Servers A and B (Apache), since if one of them becomes unavailable, requests are directed only to the other server until the faulty one comes back online. The illustrations below show the scenario under possible failures:

  • Failure 1 (Server B completely down):
[Figure: failure 1]

While Server B is unavailable, requests are forwarded only to Server A.

  • Failure 2 (the Nginx process on Server A has problems):
[Figure: failure 2]

Here we have a problem with the load-balancer process on Server A, but its Apache web server is working and ready to receive requests. Keepalived detects that Nginx on Server A went down, and Server B assigns the Virtual IP to itself and becomes the system's load balancer, forwarding requests to the Apache instances on Servers A and B.

  • Failure 3 (Server A recovers from the failure and returns to the Master state):
[Figure: failure 3]

Server A becomes fully available again, both the Nginx load-balancer process and the Apache web server. Keepalived then assigns the Virtual IP back to Server A, which returns the system to its original state, balancing requests between the Apache instances on Servers A and B.

4. Installation

4.1 Keepalived

The following commands must be run on both servers taking part in the load balancing, Servers A and B.

Install keepalived with apt-get:


apt-get install keepalived

Next, edit sysctl.conf and enable the net.ipv4.ip_nonlocal_bind option to allow processes to listen on IP addresses not yet assigned to the host:


echo net.ipv4.ip_nonlocal_bind=1 >> /etc/sysctl.conf
sysctl -p

From here on the configuration differs between Servers A and B, since each has its own specific details.

The following steps are performed on Server A, which will be the keepalived master.

Create the keepalived configuration file at /etc/keepalived/keepalived.conf with the following content:

vrrp_script chk_http_port {
  script "/usr/bin/killall -0 nginx"
  interval 2
  weight 2
}

vrrp_instance VI_1 {
  interface eth0
  state MASTER
  virtual_router_id 53
  priority 101     # 101 on master, 100 on backup
  authentication {
    auth_type PASS
    auth_pass som3_an0th3r_p4ss
  }
  track_script {
    chk_http_port
  }
  virtual_ipaddress {
    10.10.10.10/24 dev eth0
  }
}

This tells keepalived to keep monitoring the status of the nginx process. While the process is running, the IP addresses defined in the virtual_ipaddress section (10.10.10.10) are assigned to the server. If the process dies, or the whole server becomes inoperative, the other node detects it and, provided its own nginx process is up, configures the virtual IP address on itself, keeping the service always available.

The virtual_router_id and priority parameters configure values of the VRRP redundancy protocol. The same virtual_router_id must be set on all nodes taking part in the balancing, while priority must be highest on the master server (the one that starts with the virtual IP address assigned) and lower on the backup servers.
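
The health check itself can be verified by hand; a small sketch (run it on a node where nginx is installed): killall -0 does not send a real signal, it only reports through its exit code whether a process with that name exists, which is exactly what keepalived relies on.

/usr/bin/killall -0 nginx; echo $?    # prints 0 if an nginx process exists, non-zero otherwise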

Now configure keepalived on Server B.

Create the keepalived configuration file at /etc/keepalived/keepalived.conf with the following content:

vrrp_script chk_http_port {
  script "/usr/bin/killall -0 nginx"
  interval 2
  weight 2
}

vrrp_instance VI_1 {
  interface eth0
  state MASTER
  virtual_router_id 53
  priority 100     # 101 on master, 100 on backup
  authentication {
    auth_type PASS
    auth_pass som3_an0th3r_p4ss
  }
  track_script {
    chk_http_port
  }
  virtual_ipaddress {
    10.10.10.10/24 dev eth0
  }
}

Once the files are saved, start the keepalived process on Servers A and B:

service keepalived start

After that, keepalived startup messages can be seen in the logs, in particular /var/log/messages.

Jan 18 10:50:34 servidorA Keepalived_vrrp: Registering Kernel netlink reflector
Jan 18 10:50:34 servidorA Keepalived_vrrp: Registering Kernel netlink command channel
Jan 18 10:50:34 servidorA Keepalived_vrrp: Registering gratutious ARP shared channel
Jan 18 10:50:34 servidorA Keepalived_vrrp: IPVS: Can't initialize ipvs: Protocol not available
Jan 18 10:50:34 servidorA Keepalived_vrrp: Opening file '/etc/keepalived/keepalived.conf'.
Jan 18 10:50:34 servidorA Keepalived_vrrp: Configuration is using : 62094 Bytes
Jan 18 10:50:34 servidorA Keepalived_vrrp: Using LinkWatch kernel netlink reflector...
Jan 18 10:50:34 servidorA Keepalived_healthcheckers: IPVS: Can't initialize ipvs: Protocol not available
Jan 18 10:50:34 servidorA Keepalived_healthcheckers: Registering Kernel netlink reflector
Jan 18 10:50:34 servidorA Keepalived_healthcheckers: Registering Kernel netlink command channel
Jan 18 10:50:34 servidorA Keepalived_healthcheckers: Opening file '/etc/keepalived/keepalived.conf'.
Jan 18 10:50:34 servidorA Keepalived_healthcheckers: Configuration is using : 4249 Bytes
Jan 18 10:50:34 servidorA Keepalived_healthcheckers: Using LinkWatch kernel netlink reflector...
Jan 18 10:50:35 servidorA Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Jan 18 10:50:36 servidorA Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE

Check that the virtual IP address was successfully assigned to the master load-balancing server:

ip addr show dev eth0

On Web Server A this returns:

root@servidorA:~# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
link/ether 54:52:00:07:9c:2e brd ff:ff:ff:ff:ff:ff
inet 10.10.10.1/24 brd 10.10.10.255 scope global eth0
inet 10.10.10.10/24 scope global secondary eth0
inet6 fe80::5652:ff:fe07:9c2e/64 scope link
valid_lft forever preferred_lft forever

While Web Server B returns:

root@servidorB:~# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
link/ether 54:52:00:00:00:ac brd ff:ff:ff:ff:ff:ff
inet 10.10.10.2/24 brd 10.10.10.255 scope global eth0
inet6 fe80::5652:ff:fe00:ac/64 scope link
valid_lft forever preferred_lft forever

As we can see, the Virtual IP (VIP) is assigned to Web Server A, which is the keepalived master because it has the higher priority value.

We can test that failover really works by stopping the keepalived service on Web Server A. In that case Web Server B will detect the problem and assign the Virtual IP address to itself.

Stop keepalived on Web Server A:

service keepalived stop

After a few seconds, check the IP addresses assigned to Web Server B:


root@servidorB:~# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
link/ether 54:52:00:00:00:ac brd ff:ff:ff:ff:ff:ff
inet 10.10.10.2/24 brd 10.10.10.255 scope global eth0
inet 10.10.10.10/24 scope global secondary eth0
inet6 fe80::5652:ff:fe00:ac/64 scope link
valid_lft forever preferred_lft forever

As we can see, the IP address was successfully taken over by Server B, giving us IP failover.

4.2 Apache

As an example, we will host a domain on our system with the following characteristics:

DNS address: www.exemplo.com
IP address: 10.10.10.10

As you can see, our sites need to resolve to the Virtual IP address, which in turn balances the requests across the web servers.
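
The public DNS record does not have to exist to test the setup: once Nginx is listening on the VIP (section 4.3), the site can be exercised from any client by forcing the Host header; a sketch, assuming curl is available on the test machine:

curl -s -H "Host: www.exemplo.com" http://10.10.10.10/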

We will install Apache with only the configuration needed for our system to work; hardening and security options are not covered here.

The commands in this section must be run on both Web Servers A and B:

apt-get install apache2 php5

So that Apache can correctly log the client IP forwarded by Nginx, locate and replace the following log definition lines in /etc/apache2/apache2.conf:

LogFormat "%v:%p %h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-Agent}i"" vhost_combined
LogFormat "%h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-Agent}i"" combined

with:

#LogFormat "%v:%p %h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-Agent}i"" vhost_combined
LogFormat "%v %{X-Forwarded-For}i %l %u %t "%r" %>s %b "%{Referer}i" "%{User-Agent}i"" vhost_combined
#LogFormat "%h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-Agent}i"" combined
LogFormat "%{X-Forwarded-For}i %l %u %t "%r" %>s %b "%{Referer}i" "%{User-Agent}i"" combined

Now create the VirtualHost definition file for our domain at /etc/apache2/sites-available/www.exemplo.com:


<VirtualHost *:81>
DocumentRoot "/var/www"
ServerName www.exemplo.com
ErrorLog  "/var/log/apache2/www.exemplo.com.error_log"
CustomLog "/var/log/apache2/www.exemplo.com.access_log" common

</VirtualHost>

Now enable the VirtualHost we just created:

a2ensite www.exemplo.com

In /etc/apache2/ports.conf, locate the following directives, which configure the ports Apache listens on:

NameVirtualHost *:80
Listen 80

and replace them with:

NameVirtualHost *:81
Listen 81

The same must be done in the default VirtualHost file, if it is enabled, located at /etc/apache2/sites-available/default:

Replace:

<VirtualHost *:80>

with:

<VirtualHost *:81>

Check whether there is any configuration error:


root@servidorA:~# apache2ctl configtest
Syntax OK

Now restart Apache:


apache2ctl restart

4.3 Nginx

The commands in this section must be run on both Web Servers A and B.

Install the package via apt-get:

apt-get install nginx

We will use the following /etc/nginx/nginx.conf configuration file:

user www-data;
worker_processes  2;

error_log  /var/log/nginx/error.log;
pid        /var/run/nginx.pid;

events {
 worker_connections  1024;
 use epoll;
 # multi_accept on;
 }

http {
 include       /etc/nginx/mime.types;
 default_type application/octet-stream;
 access_log  /var/log/nginx/access.log;
 gzip_disable "MSIE [1-6].(?!.*SV1)";

include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
server_names_hash_bucket_size 33;
}

Next, create the file /etc/nginx/conf.d/options.conf:

# Size Limits
client_body_buffer_size         128K;
client_header_buffer_size       128K;
client_max_body_size            50M;    # php's upload_max_filesize
large_client_header_buffers     8 8k;
proxy_buffers                   8 16k;
proxy_buffer_size               32k;

# Timeouts
client_body_timeout             60;
client_header_timeout           60;
expires                         off;
keepalive_timeout               60 60;
send_timeout                    60;

# General Options
ignore_invalid_headers          on;
keepalive_requests              100;
limit_zone gulag $binary_remote_addr 5m;
recursive_error_pages           on;
sendfile                        on;
server_name_in_redirect         off;
server_tokens                   off;

# TCP options
tcp_nodelay                     on;
tcp_nopush                      on;

# Compression
gzip                            on;
gzip_buffers                    16 8k;
gzip_comp_level                 6;
gzip_http_version               1.0;
gzip_min_length                 0;
gzip_types                      text/plain text/css image/x-icon application/x-perl application/x-httpd-cgi;
gzip_vary                       on;

# Log Format
log_format                      main    '$remote_addr $host $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" "$http_user_agent" '
'"$gzip_ratio"';
proxy_cache_path                /var/cache/nginx/ levels=1:2 keys_zone=cache:100m inactive=1h max_size=1024M;

Then create the file /etc/nginx/conf.d/proxy.conf:

proxy_cache_valid     1h; # 200, 301 and 302 will be cached.
proxy_cache_use_stale error
    timeout
    invalid_header
    http_500
    http_502
    http_504
    http_404;

proxy_buffering           on;
proxy_cache_min_uses       3;
proxy_ignore_client_abort off;
proxy_intercept_errors    on;
proxy_next_upstream       error timeout invalid_header;
proxy_redirect            off;
proxy_set_header          X-Forwarded-For $remote_addr;
proxy_connect_timeout     600;
proxy_send_timeout        600;
proxy_read_timeout        600;
proxy_ignore_headers      Expires Cache-Control;
proxy_cache_key          "$scheme$host$uri$is_args$args";

And the file /etc/nginx/conf.d/upstream.conf:

upstream lb {
    server 10.10.10.1:81 max_fails=10 fail_timeout=300s;  # Web Server A
    server 10.10.10.2:81 max_fails=10 fail_timeout=300s;  # Web Server B
}

Create the domain definition file for Nginx at /etc/nginx/sites-available/www.exemplo.com with the following content:

server {
  listen   10.10.10.10:80;
  server_name  www.exemplo.com;
  access_log  /var/log/nginx/www.exemplo.com.access.log;

  location / {
    proxy_pass              http://lb;
    proxy_set_header        Host $host;
    proxy_set_header        X-Real-IP $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}

Now, a more detailed explanation of the main parameters set in these configuration files:

  • user www-data; Defines which user the nginx process runs as.
  • worker_processes 2; Defines the number of nginx worker processes. This is usually set to the number of CPU cores in the server. It is always worth tuning this value with a connection benchmark.
  • worker_connections 1024; Maximum number of connections per worker. To obtain the server's total, multiply: max_clients = worker_processes * worker_connections.
  • proxy_cache_valid 1h; How long a hit may stay in the cache. Once an object enters the cache it is marked as valid for 1h, and the server will use it whenever that object is requested. After that time expires the object leaves the cache, and new requests for it go directly to the back-end servers.
  • upstream lb; Defines a set of servers and ports to which requests that are not in the cache are forwarded. Nginx balances the requests evenly across the servers using the round-robin method.
  • listen 10.10.10.10:80; Address and port on which Nginx listens for connections.
  • server_name www.exemplo.com; DNS name of the VirtualHost.
  • proxy_pass http://lb; Forwards the requests to the lb upstream defined in upstream.conf.

Now create the link that enables the VirtualHost in the /etc/nginx/sites-enabled directory:

cd /etc/nginx/sites-enabled/
ln -s ../sites-available/www.exemplo.com

Also create the directory where the Nginx cache will be stored and set its permissions:

mkdir /var/cache/nginx/
chown -R www-data:www-data /var/cache/nginx/

Let's test the Nginx configuration before starting it; if there is any error in the configuration files, this test will report it:

root@servidorA:~# nginx -t
the configuration file /etc/nginx/nginx.conf syntax is ok
configuration file /etc/nginx/nginx.conf test is successful

Now we can start the Nginx service on Servers A and B:

service nginx start

We now have our high-availability cluster partially complete, but we can already test the IP failover and the distribution of connections across the back ends.

Create a file inside the /var/www directory of each server containing that server's hostname:

root@servidorA:~# hostname -f >> /var/www/index.html

Now open the address of the site we just configured in a browser and press Refresh several times; you will see that the system is balancing the connections between the two web servers.
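
The same check can be scripted from a shell; a minimal sketch (assuming the index.html files created above and a client that can reach the VIP):

for i in 1 2 3 4; do
  curl -s -H "Host: www.exemplo.com" http://10.10.10.10/index.html
done
# the hostnames of Server A and Server B should alternate in the output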

4.4 Repcached

We will use repcached to replicate the sessions of the applications running on the two servers, so users logged in on one server do not lose their session when balanced to the other server.

The following commands must be run on both Servers A and B.

Edit the following values in the Session section of /etc/php5/apache2/php.ini:

[Session]
 session.save_handler = memcache
 session.save_path = "tcp://10.10.10.1:11211, tcp://10.10.10.2:11211"

This tells PHP to store sessions in memcache and lists the IP addresses of the servers that take part in the replication.
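
After restarting Apache you can confirm that PHP picked up the new handler. A sketch of a quick check (assuming the php5-memcache extension is installed, which is what provides the memcache session handler; note that the CLI normally reads /etc/php5/cli/php.ini, so we point it at the Apache ini explicitly):

php -c /etc/php5/apache2/php.ini -i | grep -E 'session.save_handler|session.save_path'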

Also edit the file /etc/php5/conf.d/memcache.ini:


memcache.maxratio=0
memcache.allow_failover=1

Now proceed with the memcached installation:

apt-get install memcached

After that, download the repcached source code, compile it and install it on the server. Unfortunately repcached is not yet packaged in the official Debian repository.


cd /usr/src/
wget "http://ufpr.dl.sourceforge.net/project/repcached/repcached/2.2-1.2.8/memcached-1.2.8-repcached-2.2.tar.gz"
tar zxvf  memcached-1.2.8-repcached-2.2.tar.gz
cd memcached-1.2.8-repcached-2.2
./configure --enable-replication
make
make install

On Server A we will use the following configuration file, /etc/repcached.conf:

# repcached config file
# 2011 - jean caffou

# Run repcached as a daemon. This command is implied, and is not needed for the
# daemon to run. See the README.Debian that comes with this package for more
# information.
-d

# Log repcached's output to /var/log/repcached
logfile /var/log/repcached.log

# Be verbose
# -v

# Be even more verbose (print client commands as well)
# -vv

# Start with a cap of 64 megs of memory. It's reasonable, and the daemon default
# Note that the daemon will grow to this size, but does not start out holding this much
# memory
-m 64

# Default connection port is 11211
-p 11211

# Run the daemon as root. The start-repcached will default to running as root if no
# -u command is present in this config file
-u nobody

# Specify which IP address to listen on. The default is to listen on all IP addresses
# This parameter is one of the only security measures that repcached has, so make sure
# it's listening on a firewalled interface.
# -l 127.0.0.1

# Limit the number of simultaneous incoming connections. The daemon default is 1024
# -c 1024

# Lock down all paged memory. Consult with the README and homepage before you do this
# -k

# Return error when memory is exhausted (rather than removing items)
# -M

# Maximize core file limit
# -r

# Port for server replication. Default is 11212
-X 11212

# IP for repcached peer server
-x 10.10.10.2

And the following on Server B:

# repcached config file
# 2011 - jean caffou

# Run repcached as a daemon. This command is implied, and is not needed for the
# daemon to run. See the README.Debian that comes with this package for more
# information.
-d

# Log repcached's output to /var/log/repcached
logfile /var/log/repcached.log

# Be verbose
# -v

# Be even more verbose (print client commands as well)
# -vv

# Start with a cap of 64 megs of memory. It's reasonable, and the daemon default
# Note that the daemon will grow to this size, but does not start out holding this much
# memory
-m 64

# Default connection port is 11211
-p 11211

# Run the daemon as root. The start-repcached will default to running as root if no
# -u command is present in this config file
-u nobody

# Specify which IP address to listen on. The default is to listen on all IP addresses
# This parameter is one of the only security measures that repcached has, so make sure
# it's listening on a firewalled interface.
# -l 127.0.0.1

# Limit the number of simultaneous incoming connections. The daemon default is 1024
# -c 1024

# Lock down all paged memory. Consult with the README and homepage before you do this
# -k

# Return error when memory is exhausted (rather than removing items)
# -M

# Maximize core file limit
# -r

# Port for server replication. Default is 11212
-X 11212

# IP for repcached peer server
-x 10.10.10.1

As we can see, the end of each of these files holds the IP address of the other node running repcached: Server A's configuration file lists Server B's IP address and vice versa. We will reuse some files from the memcached installation for repcached:

cp /etc/default/memcached /etc/default/repcached

Next, edit the file /etc/default/repcached and change the following line from:

ENABLE_MEMCACHED=no

to

ENABLE_REPCACHED=yes

And disable the startup of memcached by editing the file /etc/default/memcached from:

ENABLE_MEMCACHED=YES

to

ENABLE_MEMCACHED=no

We will use the following rc script for repcached, located at /etc/init.d/repcached:

#! /bin/sh
### BEGIN INIT INFO
# Provides:             repcached
# Required-Start:       $remote_fs $syslog
# Required-Stop:        $remote_fs $syslog
# Should-Start:         $local_fs
# Should-Stop:          $local_fs
# Default-Start:        2 3 4 5
# Default-Stop:         0 1 6
# Short-Description:    Start repcached daemon
# Description:          Start up repcached, a high-performance memory caching daemon with replication
### END INIT INFO

PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON=/usr/local/bin/memcached
DAEMONBOOTSTRAP=/usr/share/memcached/scripts/start-repcached
NAME=repcached
DESC=repcached
PIDFILE=/var/run/$NAME.pid

test -x $DAEMON || exit 0
test -x $DAEMONBOOTSTRAP || exit 0

set -e

. /lib/lsb/init-functions

# Edit /etc/default/repcached to change this.
ENABLE_REPCACHED=no
test -r /etc/default/repcached && . /etc/default/repcached

case "$1" in
  start)
        echo -n "Starting $DESC: "
  if [ $ENABLE_REPCACHED = yes ]; then
        start-stop-daemon --start --quiet --exec $DAEMONBOOTSTRAP
        echo "$NAME."
        else
                echo "$NAME disabled in /etc/default/repcached."
        fi
        ;;
  stop)
        echo -n "Stopping $DESC: "
        start-stop-daemon --stop --quiet --oknodo --pidfile $PIDFILE --exec $DAEMON
        echo "$NAME."
        rm -f $PIDFILE
        ;;

  restart|force-reload)
        #
        #       If the "reload" option is implemented, move the "force-reload"
        #       option to the "reload" entry above. If not, "force-reload" is
        #       just the same as "restart".
        #
        echo -n "Restarting $DESC: "
        start-stop-daemon --stop --quiet --oknodo --pidfile $PIDFILE
        rm -f $PIDFILE
        sleep 1
        start-stop-daemon --start --quiet --exec $DAEMONBOOTSTRAP
        echo "$NAME."
        ;;
  status)
        status_of_proc $DAEMON $NAME
        ;;
  *)
        N=/etc/init.d/$NAME
        echo "Usage: $N {start|stop|restart|force-reload|status}" >&2
        exit 1
        ;;
esac

exit 0

We will also replace the file /usr/share/memcached/scripts/start-repcached with the following:


#!/usr/bin/perl -w

# start-repcached
# 2011 - Jean Caffou <jean@briskula.si>
# This script handles the parsing of the /etc/repcached.conf file
# and was originally created for the Debian distribution.
# Anyone may use this little script under the same terms as
# memcached itself.

use strict;

if($> != 0 and $< != 0)
{
    print STDERR "Only root wants to run start-repcached.n";
    exit;
}

my $params; my $etchandle; my $etcfile = "/etc/repcached.conf";

# This script assumes that repcached is located at /usr/local/bin/memcached, and
# that the pidfile is writable at /var/run/repcached.pid

my $memcached = "/usr/local/bin/memcached";
my $pidfile = "/var/run/repcached.pid";

# If we don't get a valid logfile parameter in the /etc/repcached.conf file,
# we'll just throw away all of our in-daemon output.
my $fd_reopened = "/dev/null";

sub handle_logfile
{
    my ($logfile) = @_;
    $fd_reopened = $logfile;
}

sub reopen_logfile
{
    my ($logfile) = @_;

    open *STDERR, ">>$logfile";
    open *STDOUT, ">>$logfile";
    open *STDIN, ">>/dev/null";
    $fd_reopened = $logfile;
}

# This is set up in place here to support other non -[a-z] directives

my $conf_directives = {
    "logfile" => &handle_logfile,
};

if(open $etchandle, $etcfile)
{
    foreach my $line (<$etchandle>)
    {
        $line ||= "";
        $line =~ s/#.*//g;
        $line =~ s/\s+$//g;
        $line =~ s/^\s+//g;
        next unless $line;
        next if $line =~ /^-[dh]/;

        if($line =~ /^[^-]/)
        {
            my ($directive, $arg) = $line =~ /^(.*?)\s+(.*)/;
            $conf_directives->{$directive}->($arg);
            next;
        }

        push @$params, $line;
    }

}else{
    $params = [];
}

push @$params, "-u root" unless(grep "-u", @$params);
$params = join " ", @$params;

if(-e $pidfile)
{
    open PIDHANDLE, "$pidfile";
    my $localpid = <PIDHANDLE>;
    close PIDHANDLE;

    chomp $localpid;
    if(-d "/proc/$localpid")
    {
        print STDERR "repcached is already running.n";
        exit;
    }else{
        `rm -f $localpid`;
    }

}

my $pid = fork();

if($pid == 0)
{
    reopen_logfile($fd_reopened);
    exec "$memcached $params";
    exit(0);

}else{
    if(open PIDHANDLE,">$pidfile")
    {
        print PIDHANDLE $pid;
        close PIDHANDLE;
    }else{

        print STDERR "Can't write pidfile to $pidfile.n";
    }
}

Next, configure repcached to start at system boot:


chmod +x /etc/init.d/repcached
update-rc.d repcached defaults

Start repcached:

service repcached start

Now let's run a test to make sure session replication is working correctly.

First, on Server A we will write a key called "mykey" with the value "12345", and then retrieve its value on Server B:

[user@host ~]$telnet 10.10.10.1 11211
Trying 10.10.10.1...
Connected to 10.10.10.1
Escape character is '^]'.
set mykey 1 600 5
12345
STORED
get mykey
VALUE mykey 1 5
12345
END

After that, the key should have been replicated to Server B, where we retrieve its value:

[user@host ~]$telnet 10.10.10.2 11211
Trying 10.10.10.2...
Connected to 10.10.10.2
Escape character is '^]'.
get mykey
VALUE mykey 1 5
12345
END

As we saw, the value of the mykey object was set on Server A, replicated, and retrieved on Server B. Our repcached configuration is now working properly.

4.5 Rsync Mirror

We will configure rsync to synchronize the contents of the /var/www directory from Server A to Server B, so the files for the sites should be copied to Server A, which will automatically copy them to Server B. Therefore services such as FTP, used to update the content of the hosted websites, must be installed on Server A.

apt-get install rsync

Next, create an rsync user on both hosts and configure RSA key authentication for SSH, so there is no password prompt every time the command runs:

On Server B:


root@servidorB:~# useradd rsync -c "Rsync User" -d /var/www/ -s /bin/false
root@servidorB:~# mkdir ~rsync/.ssh
root@servidorB:~# chown rsync:root /var/www /var/www/.ssh
root@servidorB:~# chmod 0755 /var/www
root@servidorB:~# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
6f:5b:a1:6b:e9:f6:78:4e:a1:9f:f4:86:79:73:cd:46 root@marina
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|                 |
|                 |
|        S   o    |
|         . o o  E|
|          =.+o o.|
|         .+O+o+ =|
|         +=+=o.+ |
+-----------------+

On Server A:

root@servidorA:~# useradd rsync -c "Rsync User" -d /var/www/ -s /bin/bash
root@servidorA:~# chown rsync:root /var/www
root@servidorA:~# chmod 0755 /var/www

Now copy the contents of the newly created key from Server B to Server A, following the steps below:

Server B:


root@servidorB:~# scp /root/.ssh/id_rsa.pub root@10.10.10.1:~rsync/.ssh/authorized_keys

And set the proper permissions on the files created on Server A:


root@servidorA:~# chown -R rsync:rsync ~rsync/.ssh/
root@servidorA:~# chmod 0600 ~rsync/.ssh/authorized_keys

Now create a directory on Server A and then run rsync on Server B, which should synchronize the two directories:

Server A:


root@servidorA:~# mkdir /var/www/rsync-test

Server B:


root@servidorB:~# rsync -av --delete rsync@10.10.10.1:/var/www/ /var/www/

If rsync copied the directory created on Server A to Server B, our configuration is correct. All that remains is to add the command to cron, running every minute, to keep the directories as synchronized as possible.

On Server B, open the root user's crontab with the following command:


crontab -e

And add the following line:


# rsync
*/1 * * * * rsync -a --delete rsync@10.10.10.1:/var/www/ /var/www/

The synchronization from Server A to Server B is now configured and will run once every minute.

4.6 SSL Support

If we want to use HTTPS on the sites hosted in our high-availability cluster, we must configure the digital certificates in Nginx, while Apache keeps serving the sites in plain text, as shown in the figure below:

[Figure: HTTPS scenario]

Let's add SSL support to the www.exemplo.com VirtualHost that is already configured in Nginx. We will assume the SSL certificates are already at hand; if you need a valid, free SSL certificate, I recommend http://www.startssl.com, which is trusted by most browsers and offers a free tier of valid certificates.

We will use the following /etc/nginx/sites-available/www.exemplo.com file:

server {
  listen          10.10.10.10:443;
  server_name     www.exemplo.com;

  access_log      /var/log/nginx/www.exemplo.com.access.log;
  error_log       /var/log/nginx/www.exemplo.com.error.log;

  ssl on;
  ssl_certificate      /etc/apache2/ssl.crt/www.exemplo.com.crt;
  ssl_certificate_key  /etc/apache2/ssl.key/www.exemplo.com.key;

  ssl_session_cache  shared:SSL:10m;
  ssl_session_timeout  5m;
  ssl_protocols  TLSv1;
  ssl_ciphers HIGH:!ADH:!MD5;
  ssl_prefer_server_ciphers   on;
  keepalive_timeout    60;

  location / {
    proxy_pass              http://lb;
    proxy_set_header        Host $host;
    proxy_set_header        X-Real-IP $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}
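
Once Nginx has been reloaded with this server block, the handshake can be checked from any machine with OpenSSL installed; a small sketch:

echo | openssl s_client -connect 10.10.10.10:443 2>/dev/null | openssl x509 -noout -subject -dates
# the subject should show CN=www.exemplo.com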

4.7 Enabling the Nginx Cache

Nginx's cache option must be used with care, especially when serving dynamic content or application pages.

A good use of the cache is heavily accessed static pages, for example exam results, institutional home pages and similar.

Suppose, for example, that we want to enable caching for the page www.exemplo.com/fotos/.

We will change the configuration file /etc/nginx/sites-enabled/www.exemplo.com:

location /fotos/  {
 proxy_pass              http://lb;
 proxy_set_header        Host $host;
 proxy_set_header        X-Real-IP $remote_addr;
 proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
 # cache settings
 include                 /etc/nginx/conf.d/proxy.conf;
 proxy_cache             cache;
}

The main directives controlling how the cache works are:

  • proxy_cache_min_uses 3; Defines the number of hits required before a page enters the cache. In this example, once the configured page has been requested 3 times it is included in the Nginx cache for the period defined by the proxy_cache_valid directive, and during that interval the Apache server is not consulted for this page (a quick command-line check is sketched after this list).
  • proxy_cache_key "$scheme$host$uri$is_args$args"; Defines the key used for the cached objects. In the example we use the full URL with its arguments, but we could also set cookies in the applications so that certain pages never enter the cache.
  • proxy_cache_valid 1h; How long a page may remain in the cache. After this time expires, the page is fetched from the back-end servers again, and after proxy_cache_min_uses requests it goes back into the cache.
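
A rough way to watch the cache kick in from the command line, as mentioned in the list above; it is only a sketch and relies on the response-time difference between proxied and cached answers for the /fotos/ location configured above:

for i in 1 2 3 4 5; do
  curl -s -o /dev/null -w '%{time_total}\n' -H "Host: www.exemplo.com" http://10.10.10.10/fotos/
done
# after the third request the page should be answered from the Nginx cache,
# so the remaining times should drop noticeably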

For more information about Nginx directives, I recommend reading the online documentation on the project wiki: http://wiki.nginx.org/Modules.

Raising the qemu network interface limit

Anyone who uses Linux KVM as a virtualization platform and runs VMs with many network interfaces (notably gateways reaching several VLANs) has probably hit a qemu error saying the maximum allowed number of network interfaces was exceeded.

The maximum number of network interfaces is defined at compile time in a qemu header (qemu/net.h). To raise this limit we will need the qemu source code.

The following error occurs when starting a VM with more than 8 network interfaces:

qemu: Too Many NICs

Using a Debian GNU/Linux system as the base:

mkdir /usr/src/qemu
cd /usr/src/qemu

Download the qemu source:

apt-get source qemu
cd qemu-0.9.1/

Edit the file net.h:

vim net.h

Change the MAX_NICS constant from 8 to the value you want (I tested with 12 NICs without problems):

#define MAX_NICS 12

Build the qemu package (run the command from inside the qemu-0.9.1 directory):

dpkg-buildpackage -rfakeroot -uc -b

At this point I had to install a few dependencies, both for the build environment and for qemu itself; this varies from system to system, but in my case the following was enough:

apt-get install libsdl1.2-dev debhelper quilt nasm gcc-3.4 fakeroot
apt-get install libx11-dev  zlib1g-dev texi2html libgnutls-dev libasound2-dev libgpmg1-dev libbrlapi-dev

Uninstall the original qemu:

apt-get purge qemu

Once compiled, the .deb package is built one directory up (/usr/src/qemu in my case):

cd ..
dpkg -i qemu_0.9.1-10lenny1_amd64.deb
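
With the rebuilt package installed, a VM can now be started with more than 8 NICs. A minimal sketch of the qemu 0.9.x command-line style (the disk image path, tap interface names and bridge setup are placeholders for whatever your environment uses, and the binary may be qemu, qemu-system-x86_64 or kvm depending on the target):

# build "-net nic -net tap" pairs for 12 interfaces (vlan 0..11)
ARGS=""
for i in $(seq 0 11); do
  ARGS="$ARGS -net nic,vlan=$i -net tap,vlan=$i,ifname=tap$i"
done
qemu -hda /path/to/gateway.img -m 512 $ARGS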

Providing static routes (classless static routing) to clients via DHCP

This is a very interesting, and somewhat poorly documented, solution when inter-VLAN routing is done by a layer-3 switch, for example. Instead of the network gateway routing between all the VLANs, we can let the switch itself do it and hand these routes to the workstations through dhcpd.

By default dhcpd provides a mechanism for static routes (the static-routes option), but it only allows routes to hosts, not to entire networks.

So here is a solution that allows routes to networks with non-classful (classless) masks.

Add the following lines to the global definitions in dhcpd.conf:

option rfc3442-classless-static-routes code 121 = array of unsigned integer 8;
option ms-classless-static-routes         code 249 = array of unsigned integer 8;

These lines define RFC 3442 (The Classless Static Route Option for Dynamic Host Configuration Protocol (DHCP) version 4): code 121 is the RFC implementation and code 249 is Microsoft's implementation of the same RFC (for Windows DHCP clients).

Once these options are defined, we can define the routes inside each subnet block:

option rfc3442-classless-static-routes 24, 10, 1, 2, 10, 1, 1, 254;
option ms-classless-static-routes 24, 10, 1, 2, 10, 1, 1, 254;

These lines add a route to the 10.1.2.0/24 network via the gateway 10.1.1.254, following this production rule:

[netmask, network address byte 1, network address byte 2, network address byte 3, route byte 1, route byte 2, route byte 3, route byte 4]

Example:

dhcpd.conf (only the options relevant to this topic are shown; add the parameters appropriate to your scenario):

option rfc3442-classless-static-routes code 121 = array of unsigned integer 8;
option ms-classless-static-routes         code 249 = array of unsigned integer 8;

subnet 10.1.1.0 netmask 255.255.255.0 {
range 10.1.1.100 10.1.1.200;
option broadcast-address 10.1.1.255;
option routers 10.1.1.1;
option rfc3442-classless-static-routes 24, 10, 1, 2, 10, 1, 1, 254,
24, 10, 1, 3, 10, 1, 1, 254,
24, 10, 1, 4, 10, 1, 1, 254,
24, 10, 1, 5, 10, 1, 1, 254,
24, 10, 1, 6, 10, 1, 1, 254,
24, 10, 1, 7, 10, 1, 1, 254;
option ms-classless-static-routes 24, 10, 1, 2, 10, 1, 1, 254,
24, 10, 1, 3, 10, 1, 1, 254,
24, 10, 1, 4, 10, 1, 1, 254,
24, 10, 1, 5, 10, 1, 1, 254,
24, 10, 1, 6, 10, 1, 1, 254,
24, 10, 1, 7, 10, 1, 1, 254;
}

We created routes to the subnets 10.1.2.0/24, 10.1.3.0/24, 10.1.4.0/24, 10.1.5.0/24, 10.1.6.0/24 and 10.1.7.0/24 via the gateway 10.1.1.254 (our layer-3 switch in this case) instead of the default gateway 10.1.1.1.
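
On a Linux client running ISC dhclient, the result can be verified after renewing the lease; a sketch (assuming the interface is eth0 and the client requests option 121, as Debian's rfc3442 dhclient hook does):

dhclient -r eth0 && dhclient eth0
ip route | grep 10.1.1.254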

Note: I found that devices running Android (2.3.5) cannot set the default gateway correctly on a network that uses this feature. I had to set the default route manually on the device so it could reach the Internet.

OpenLDAP + Samba Domain Controller On Ubuntu 7.10

From: RickyJones (http://www.howtoforge.com/openldap-samba-domain-controller-ubuntu7.10)

Preface

This document is a step by step guide for configuring Ubuntu 7.10 as a Samba Domain Controller with an LDAP backend (OpenLDAP). The point is to configure a server that can be comparable, from a central authentication point of view, to a Windows Server 2003 Domain Controller. The end result will be a server with an LDAP directory for storing user, group, and computer accounts. A Windows XP Professional SP2 workstation will be able to join the domain once properly configured. Please note that you do not have a fully comparable Windows domain controller at this time. Do not kid yourself, this guide only gets you a server with LDAP authentication. Of course this can be expanded to include slave servers to spread out authentication over multiple networks. Please also note that it took me approximately two and a half weeks to compile this information and get it working. The same functionality can be had in Windows in less than four hours (and this includes operating system installation). In my humble opinion the open source community will need to work on this side of Linux in order for it to be a true alternative to Windows.

Legal/Warranty/Etc…

This document is provided as-is with no implied warranty or agreement. I will not support other systems without compensation. This document is the property of Richard Maloley II. This document may be redistributed, copied, printed, and modified at will, however my name must remain as the original source. Legal action can and will be brought against any and all infractions of the terms.

Special Items of Interest

* My hostname during the installation was set to: dc01-ubuntu
* My fully qualified domain name will be: dc01-ubuntu.example.local
* After the installation my /etc/hostname was changed to: dc01-ubuntu.example.local
* After the installation my /etc/hosts was changed so that the line 127.0.1.1 contained “dc01-ubuntu dc01-ubuntu.example.local” to ensure no issues with name resolution.
* My LDAP domain is: example.local
* This translates to a Base DN of: dc=example,dc=local
* All passwords used are “12345” to keep things simple.
* I am not using TLS or SSL for my LDAP directory. Too much work for this tutorial.
* The user I created during the installation is: sysadmin
* The password I assigned during the installation is: 12345
* This local user will be used for all configuration purposes.

Assumptions

* Ubuntu Server 7.10 is installed.
* No other software was installed during the OS install!
* After installation you enabled all the repositories in /etc/apt/sources.list
* You fully updated your system

apt-get update
apt-get upgrade
reboot

* You configured a static IP address. For me I used the following information:

address 192.168.0.60
gateway 192.168.0.1
netmask 255.255.255.0

* You edited your /etc/hosts file so that your hostname and fully qualified domain name are on the line 127.0.1.1

127.0.1.1 dc01-ubuntu dc01-ubuntu.example.local

* You installed the OpenSSH Server.

apt-get install openssh-server

* You did not set a password on the root account. All commands will be run with sudo or by opening a root shell.

sudo bash

* Currently you do not have any other software running nor do you have any other users on the system.

Step 1: Install WebMin

We will be installing WebMin. Why? I like to use it to configure some things. This step is technically optional, but I feel it greatly simplifies administration of the server in the future.

# Download the WebMin package from their website.

wget http://superb-west.dl.sourceforge.net/sourceforge/webadmin/webmin_1.380_all.deb

# Install pre-requisite software.

apt-get install openssl libauthen-pam-perl libio-pty-perl libmd5-perl libnet-ssleay-perl

# Install WebMin

dpkg -i webmin_1.380_all.deb

# If the installation is successful you will see a message similar to this:

“Webmin install complete. You can now login to https://dc01-ubuntu.example.local:10000/
as root with your root password,
or as any user who can use sudo to run commands as root.”

Step 2: Install OpenLDAP

For our LDAP server we will be using the very flexible OpenLDAP Server (slapd).

# Install the software.

apt-get install slapd ldap-utils migrationtools

# Answer the on-screen prompts with:

Admin password: 12345
Confirm password: 12345

# We need to configure OpenLDAP now.

dpkg-reconfigure slapd

# Answer the on-screen prompts with:

No
DNS domain name: example.local
Name of your organization: example.local
Admin password: 12345
Confirm password: 12345
OK
BDB
No
Yes
No

# Restart OpenLDAP.

/etc/init.d/slapd restart
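
# (Optional) Confirm that slapd answers with the new suffix. This is only a sanity-check sketch, assuming the example.local domain and the 12345 password used throughout this guide.

ldapsearch -x -s base -b "dc=example,dc=local" -D "cn=admin,dc=example,dc=local" -w 12345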

Step 3: Install SAMBA

We will be using SAMBA for some main functions in this tutorial. In order to configure OpenLDAP correctly we must first install SAMBA.

# Install the software.

apt-get install samba smbldap-tools smbclient samba-doc

Step 4: Configure OpenLDAP for use with SAMBA

In order to use LDAP and SAMBA we need to configure the /etc/ldap/slapd.conf file.

# Copy the samba.schema file to the OpenLDAP schema directory.

cp /usr/share/doc/samba-doc/examples/LDAP/samba.schema.gz /etc/ldap/schema/

# Unzip the file.

gzip -d /etc/ldap/schema/samba.schema.gz

# Open the /etc/ldap/slapd.conf file for editing.

vim /etc/ldap/slapd.conf

# Add the following lines to the document where the other “include” lines are:

include         /etc/ldap/schema/samba.schema
include         /etc/ldap/schema/misc.schema

# Change the line:

access to attribute=userPassword

# to:

access to attrs=userPassword,sambaNTPassword,sambaLMPassword

# Restart OpenLDAP:

/etc/init.d/slapd restart

Step 5: Configure SAMBA

Now we need to configure SAMBA. This includes configuring the /etc/samba/smb.conf file.

# Open up the SAMBA directory.

cd /etc/samba/

# Backup the samba configuration file.

cp smb.conf smb.conf.original

# Open the samba configuration file for editing.

vim smb.conf

# Make the following changes throughout the file:

workgroup = EXAMPLE
security = user
passdb backend = ldapsam:ldap://localhost/
obey pam restrictions = no
#######################################################################
#COPY AND PASTE THE FOLLOWING UNDERNEATH "OBEY PAM RESTRICTIONS = NO"
#######################################################################
#
#	Begin: Custom LDAP Entries
#
ldap admin dn = cn=admin,dc=example,dc=local
ldap suffix = dc=example, dc=local
ldap group suffix = ou=Groups
ldap user suffix = ou=Users
ldap machine suffix = ou=Computers
ldap idmap suffix = ou=Users
; Do ldap passwd sync
ldap passwd sync = Yes
passwd program = /usr/sbin/smbldap-passwd %u
passwd chat = *New*password* %n\n *Retype*new*password* %n\n *all*authentication*tokens*updated*
add user script = /usr/sbin/smbldap-useradd -m "%u"
ldap delete dn = Yes
delete user script = /usr/sbin/smbldap-userdel "%u"
add machine script = /usr/sbin/smbldap-useradd -w "%u"
add group script = /usr/sbin/smbldap-groupadd -p "%g"
delete group script = /usr/sbin/smbldap-groupdel "%g"
add user to group script = /usr/sbin/smbldap-groupmod -m "%u" "%g"
delete user from group script = /usr/sbin/smbldap-groupmod -x "%u" "%g"
set primary group script = /usr/sbin/smbldap-usermod -g "%g" "%u"
domain logons = yes
#
#	End: Custom LDAP Entries
#
#####################################################
#STOP COPYING HERE!
#####################################################

# Comment out the line:

invalid users = root

# Add the following line:

logon path =
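
# Optional check: before restarting, you can have Samba validate the syntax of smb.conf (testparm ships with the samba package):

testparm -s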

# Restart SAMBA.

/etc/init.d/samba restart

# Give SAMBA the “admin” password to the LDAP tree.

smbpasswd -w 12345
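
# Optional check: confirm that Samba is up and answering by listing its shares anonymously:

smbclient -L localhost -N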

Step 6: Configure the SMBLDAP-TOOLS package.

We will be using the smbldap-tools package to populate our directory, add users, add workstations, etc… But, the tools need to be configured first!

# Open up the examples directory.

cd /usr/share/doc/smbldap-tools/examples/

# Copy the configuration files to /etc/smbldap-tools:

cp smbldap_bind.conf /etc/smbldap-tools/
cp smbldap.conf.gz /etc/smbldap-tools/

# Unzip the configuration file.

gzip -d /etc/smbldap-tools/smbldap.conf.gz

# Open up the /etc/smbldap-tools directory.

cd /etc/smbldap-tools/

# Get the SID (Security ID) for your SAMBA domain.

net getlocalsid

This results in (example): SID for domain DC01-UBUNTU is: S-1-5-21-949328747-3404738746-3052206637

# Open the /etc/smbldap-tools/smbldap.conf file for editing.

vim smbldap.conf

# Edit the file so that the following information is correct (according to your individual setup):

SID="S-1-5-21-949328747-3404738746-3052206637" ## This line must have the same SID as when you ran "net getlocalsid"
sambaDomain="EXAMPLE"
ldapTLS="0"
suffix="dc=example,dc=local"
sambaUnixIdPooldn="sambaDomainName=EXAMPLE,${suffix}"
userSmbHome=
userProfile=
userHomeDrive=
userScript=
mailDomain="example.local"

# Open the /etc/smbldap-tools/smbldap_bind.conf file for editing.

vim smbldap_bind.conf

# Edit the file so that the following information is correct (according to your individual setup):

slaveDN="cn=admin,dc=example,dc=local"
slavePw="12345"
masterDN="cn=admin,dc=example,dc=local"
masterPw="12345"

# Set the correct permissions on the above files:

chmod 0644 /etc/smbldap-tools/smbldap.conf
chmod 0600 /etc/smbldap-tools/smbldap_bind.conf

Step 7: Populate LDAP using smbldap-tools

Now we need to populate our LDAP directory with some necessary SAMBA and Windows entries.

# Execute the command to populate the directory.

smbldap-populate -u 30000 -g 30000

# At the password prompt assign your root password:

12345

# Verify that the directory has information in it by running the command:

ldapsearch -x -b dc=example,dc=local | less

Step 8: Add an LDAP user to the system

It is time for us to add an LDAP user. We will use this user account to verify that LDAP authentication is working.

# Add the user to LDAP

smbldap-useradd -a -m -M ricky -c "Richard M" ricky

# Here is an explanation of the command switches that we used.

-a allows Windows as well as Linux login
-m makes a home directory, leave this off if you do not need local access
-M sets up the username part of their email address
-c specifies their full name

# Set the password for the new account.

smbldap-passwd ricky
# Password will be: 12345
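
# Optional check: confirm that the new account exists in the directory (adjust the Base DN if yours differs):

ldapsearch -x -b dc=example,dc=local '(uid=ricky)'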

Step 9: Configure the server to use LDAP authentication.

The basic steps for this section came from the Ubuntu Forums (http://ubuntuforums.org/showthread.php?t=597056). Thanks to all who contributed to that thread! Basically we need to tell our server to use LDAP authentication as one of its options. Be careful with this! It can cause your server to break! This is why we always have a backup around.

# Install the necessary software for this to work.

apt-get install auth-client-config libpam-ldap libnss-ldap

# Answer the prompts on your screen with the following:

Should debconf manage LDAP configuration?: Yes
LDAP server Uniform Resource Identifier: ldap://127.0.0.1/
Distinguished name of the search base: dc=example,dc=local
LDAP version to use: 3
Make local root Database admin: Yes
Does the LDAP database require login? No
LDAP account for root: cn=admin,dc=example,dc=local
LDAP root account password: 12345

# Open the /etc/ldap.conf file for editing.

vim /etc/ldap.conf

# Configure the following according to your setup:

host 127.0.0.1
base dc=example,dc=local
uri ldap://127.0.0.1/
rootbinddn cn=admin,dc=example,dc=local
bind_policy soft

# Copy the /etc/ldap.conf file to /etc/ldap/ldap.conf

cp /etc/ldap.conf /etc/ldap/ldap.conf

# Create a new file /etc/auth-client-config/profile.d/open_ldap:

vim /etc/auth-client-config/profile.d/open_ldap

# Insert the following into that new file:

[open_ldap]
nss_passwd=passwd: compat ldap
nss_group=group: compat ldap
nss_shadow=shadow: compat ldap
pam_auth=auth       required     pam_env.so
 auth       sufficient   pam_unix.so likeauth nullok
 auth       sufficient   pam_ldap.so use_first_pass
 auth       required     pam_deny.so
pam_account=account    sufficient   pam_unix.so
 account    sufficient   pam_ldap.so
 account    required     pam_deny.so
pam_password=password   sufficient   pam_unix.so nullok md5 shadow use_authtok
 password   sufficient   pam_ldap.so use_first_pass
 password   required     pam_deny.so
pam_session=session    required     pam_limits.so
 session    required     pam_mkhomedir.so skel=/etc/skel/
 session    required     pam_unix.so
 session    optional     pam_ldap.so

# Backup the /etc/nsswitch.conf file:

cp /etc/nsswitch.conf /etc/nsswitch.conf.original

# Backup the /etc/pam.d/ files:

cd /etc/pam.d/
mkdir bkup
cp * bkup/

# Enable the new LDAP Authentication Profile by executing the following command:

auth-client-config -a -p open_ldap

# Reboot the server and test to ensure that you can still log in using SSH and LDAP.

reboot
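
# Optional check after the reboot: confirm that the system can resolve LDAP users before testing an SSH login:

getent passwd ricky
id ricky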

Step 10: Install BIND (DNS Server)

Because we are going to be a domain controller and a source for authentication, it makes sense to also have some DNS services available. Please note that if you have multiple servers at your disposal, it is recommended to install a separate DNS server as well, so that you have two available.

# Install the software.

apt-get install bind9

Step 11: Configure our primary DNS Zone using WebMin

We now want to create our DNS zone so that we are in charge of it and can make use of it. I prefer using a GUI to do this as opposed to editing the zone files.

In a web browser navigate to: https://192.168.0.60:10000 (Please use the IP address that YOU assigned to your server.)
Login as “sysadmin” and “12345”.
Servers > BIND DNS Server
Under “Existing DNS Zones” click “Create master zone”.

Zone type: Forward (Names to Addresses)
Domain name / Network: example.local
Records file: Automatic
Master server: dc01-ubuntu.example.local
Email address: sysadmin@example.local

Click “Create” button.

Click “Apply Changes” button.

Click “Address (0)” at the top.

Name: dc01-ubuntu
Address: 192.168.0.60
Click “Create” button
Click “Return to record types”

Click “Apply Changes” button.

Step 12: Configure the server to use itself for DNS

DNS doesn’t do a whole lot of good if we don’t use it. In this section we point our /etc/resolv.conf file to ourselves. I also recommend leaving a known working DNS server in place as the secondary source, just in case something goes wrong. In some of my trials I did notice that the server would hang trying to start BIND9.

# Open the /etc/resolv.conf file for editing.

vim /etc/resolv.conf

# Add the following lines to the beginning of the file:

search example.local
nameserver 192.168.0.60

# Reboot the server to ensure that DNS is working correctly.

reboot
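
# Optional check after the reboot: confirm that our own DNS server answers queries (dig is in the dnsutils package; install it with "apt-get install dnsutils" if it is missing):

dig @192.168.0.60 dc01-ubuntu.example.local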

Step 13: Add a workstation account to LDAP

This tutorial is meant to create an open-source domain for Windows XP Professional clients (and Linux clients) to authenticate against. Therefore we will add a workstation account for the Windows XP Professional workstation that we will be joining to the domain.

# Execute the command:

smbldap-useradd -w client-winxp

* “client-winxp” is the hostname of the computer that you will be adding to the domain. It must match the workstation’s hostname exactly!
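
# Optional check: confirm that the machine account was created. Machine accounts are normally stored with a trailing “$” (adjust the Base DN if yours differs):

ldapsearch -x -b dc=example,dc=local '(uid=client-winxp$)'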

Step 14: Configure your Windows XP Professional Client

Now I will walk you through configuring your Windows XP Professional workstation so that it will join the domain.

# Assumptions:

* This is a vanilla installation of Windows XP Professional SP2.
* The computer name was set during installation to be: client-winxp
* The Administrator password assigned is: 12345
* All other installation options have been left at their default settings.
* After the installation the following occurred:
* The only user account on the computer in use was “Administrator”
* All available Windows Updates were installed.
* A static IP address was assigned with the following information (for my setup only!)

IP Address: 192.168.0.61
Gateway: 192.168.0.1
Netmask: 255.255.255.0
DNS: 192.168.0.60
Search domain: example.local

# Join the workstation to the domain.

* Log into the computer as Administrator.
* Right click “My Computer” and click “Properties”.
* Click the tab “Computer Name”.
* Click the button labeled “Change”.
* At the bottom click the radio button labeled “Domain”.
* In the box type the word “example” without quotes!
* Click the “OK” button.
* At the password prompt enter “root” for the user and “12345” for the password (substitute the password for what you assigned to your root user earlier!).

* It should say “Welcome to the example domain.”
* Click “OK”.
* Click “OK” again.
* Click “OK” again.
* Restart the workstation.

# Log in with your test user (“ricky”) from earlier.
Try logging into the Windows XP workstation (after selecting the domain from the drop down box) using our test user. It should work without issue!

# Notes
Please note that this is basic authentication right now. You’re on your own if you wish to add logon scripts, mapped drives, etc…

Step 15: (Optional) Install Apache2 and PHPLDAPAdmin

A nice way to view and modify your LDAP tree is with a GUI. PHPLDAPAdmin is one that many people recommend so I will show you how to install it and use it.

# Install the software.

apt-get install apache2 phpldapadmin

# Open the file /etc/apache2/httpd.conf for editing:

vim /etc/apache2/httpd.conf

# Add the following line to the top of the file. This prevents an annoying error message from Apache2.

ServerName dc01-ubuntu.example.local

# Restart Apache2

/etc/init.d/apache2 restart

# Copy the PHPLDAPAdmin folder into the main web site directory. This is the lazy way of doing things: we don’t need to create a separate virtual host, we just access PHPLDAPAdmin by going to: http://192.168.0.60/phpldapadmin/

cp -R /usr/share/phpldapadmin/ /var/www/phpldapadmin

There you have it! A full Ubuntu LDAP and SAMBA Domain Controller in 15 easy steps.

Feeding entropy to GnuPG on Fedora

From: Aaron S. Hawley (http://aaronhawley.livejournal.com/10807.html)

In a previous post, I mentioned we are putting together an RPM build server at work. The RPMs that are built are signed by an encryption key and uploaded to the Yum server. The GnuPG (GPG) signing will give us confidence that the RPMs were from the build server and weren’t tampered with since they were built and copied to the Yum repository.

At this point, the security of the signing key is not important. I say this confidently even after the recent package signing compromise at Fedora and Red Hat. We want to have automated package signing and we’re only building packages for distribution inside the office.

One nice feature of GnuPG is its automatic key generation. The RPM build server generates its own key, preferably as non-interactively as possible. Unfortunately, this requires entropy to work consistently.

For information about automatically generating keys with GPG see the section “Unattended key generation” in the DETAILS file that comes with GnuPG. That documentation can be found on a GNU/Linux system with the following command.

  $ less -p "^Unattended" /usr/share/doc/gnupg-*/DETAILS

As the summary says:

This feature allows unattended generation of keys controlled by a parameter file. To use this feature, you use --gen-key together with --batch and feed the parameters either from stdin or from a file given on the command line [sic].

Here’s an example of automatically generating a secret GPG key.

  $ cat gpg-key.conf
  %echo Generating a package signing key
  Key-Type: DSA
  Key-Length: 1024
  Subkey-Type: ELG-E
  Subkey-Length: 2048
  Name-Real: Build Server
  Name-Email: builds@site.org
  Expire-Date: 0
  Passphrase: Does not ex1st!
  %commit
  %echo Done
  $ gpg --batch --gen-key gpg-key.conf \
        > gpg-keygen.log \
        2> gpg-keygen_error.log

Those familiar with generating keys know that it is an extremely interactive process. Not just for entering the details about the key, but because you need to inject entropy into the computer to ensure the newly generated key is random. (Debian had erroneously weakened the random number generation in a security-related package, necessitating a significant response for the systems affected by the vulnerability.) Usually, GnuPG receives entropy by jiggling the mouse or banging on the keyboard. As the GnuPG README says:

If you see no progress during key generation you should start some other activities such as moving the mouse or hitting the CTRL and SHIFT keys. Generate a key only on a machine where you have direct physical access – don’t do it over the network or on a machine also used by others, especially if you have no access to the root account. (original emphasis)

This becomes a problem on servers that don’t have mice or keyboards attached. One would typically see the following message from GnuPG complaining about not having enough entropy.

  $ gpg --batch --gen-key gpg-key.conf
  gpg: Generating a package signing key
  .++++++++++++++++++++...+++++..++++++++++++++++++++++++++++++++++++++++++++++++
  +++++++.+++++++++++++++++++++++++++++++++++++++++++++++++++++++..>+++++...+++++

  Not enough random bytes available.  Please do some other work to give
  the OS a chance to collect more entropy! (Need 123 more bytes)

  gpg: Interrupt caught ... exiting

As a sidebar, the “Key generation” section of the DETAILS file explains all those special characters spit to the screen when the key is generated.

    Key generation shows progress by printing different characters to
    stderr:
	     "."  Last 10 Miller-Rabin tests failed
	     "+"  Miller-Rabin test succeeded
	     "!"  Reloading the pool with fresh prime numbers
	     "^"  Checking a new value for the generator
	     "<"  Size of one factor decreased
	     ">"  Size of one factor increased

I tried various complicated strategies for creating entropy on a headless system, without success. One of them was piping the output of /dev/random into /dev/urandom and vice versa. Let’s see if I can rehash it here.

  $ b=2048;
    future=$(date -d'+6 seconds' +'%s' );
    while [ ${future} -gt $(date +'%s') ]; do
      head -c ${b} /dev/random > /dev/urandom;
      head -c ${b} /dev/urandom > /dev/random;
    done &
  $ gpg --batch --gen-key gpg-key.conf

Anyway, it didn’t work.

Running this does, though.

  # rngd -r /dev/urandom

The rngd service feeds data from a random number source (in this case /dev/urandom) into the kernel’s entropy pool, providing the “true random number generation” (RNG) that the key generation is waiting for. It comes as part of the rng-tools package.
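
One quick way to see the effect is to look at the size of the kernel’s entropy pool before and after starting rngd; the value should climb once rngd is running:

  $ cat /proc/sys/kernel/random/entropy_avail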

According to the documentation in the Linux kernel:

The hw_random framework is software that makes use of a special hardware feature on your CPU or motherboard, a Random Number Generator (RNG). The software has two parts: a core providing the /dev/hw_random character device and its sysfs support, plus a hardware-specific driver that plugs into that core.

In Fedora, this package can be installed with Yum.

  # yum install rng-utils

I’ve arrived on Planet Fedora. Planet Fedora is an aggregation of article feeds from members of the Fedora Project — a community project affiliated with Red Hat that distributes the GNU/Linux operating system.

How to remove the gdm user list in Ubuntu 10.04

Put the following in /var/lib/gdm/.gconf/apps/gdm/simple-greeter/%gconf.xml:

<?xml version="1.0"?>
<gconf>
 <entry name="recent-layouts" mtime="1276112027" type="list" ltype="string">
 <li type="string">
 <stringvalue>br</stringvalue>
 </li>
 </entry>
 <entry name="recent-languages" mtime="1276112026" type="list" ltype="string">
 <li type="string">
 <stringvalue>pt_BR.utf8</stringvalue>
 </li>
 </entry>
 <entry name="disable_user_list" mtime="1227900586" type="bool" value="true">
 </entry>
</gconf>
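
A commonly suggested alternative, which may achieve the same result without editing the file by hand, is to set the key with gconftool-2 as the gdm user (assuming gconftool-2 is available, as it is on a default Ubuntu 10.04 desktop):

sudo -u gdm gconftool-2 --type bool --set /apps/gdm/simple-greeter/disable_user_list true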

Setting Up A High-Availability Load Balancer (With Failover and Session Support) With HAProxy/Keepalived On Debian Etch

Version 1.0
Author: Falko Timme <ft [at] falkotimme [dot] com>
Last edited 10/16/2007

This article explains how to set up a two-node load balancer in an active/passive configuration with HAProxy and keepalived on Debian Etch. The load balancer sits between the user and two (or more) backend Apache web servers that hold the same content. Not only does the load balancer distribute the requests to the two backend Apache servers, it also checks the health of the backend servers. If one of them is down, all requests will automatically be redirected to the remaining backend server. In addition to that, the two load balancer nodes monitor each other using keepalived, and if the master fails, the slave becomes the master, which means the users will not notice any disruption of the service. HAProxy is session-aware, which means you can use it with any web application that makes use of sessions (such as forums, shopping carts, etc.).

From the HAProxy web site: “HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. It is particularly suited for web sites crawling under very high loads while needing persistence or Layer7 processing. Supporting tens of thousands of connections is clearly realistic with today’s hardware. Its mode of operation makes its integration into existing architectures very easy and riskless, while still offering the possibility not to expose fragile web servers to the Net.”

I do not issue any guarantee that this will work for you!

1 Preliminary Note

In this tutorial I will use the following hosts:

  • Load Balancer 1: lb1.example.com, IP address: 192.168.0.100
  • Load Balancer 2: lb2.example.com, IP address: 192.168.0.101
  • Web Server 1: http1.example.com, IP address: 192.168.0.102
  • Web Server 2: http2.example.com, IP address: 192.168.0.103
  • We also need a virtual IP address that floats between lb1 and lb2: 192.168.0.99

Here’s a little diagram that shows our setup:

        shared IP=192.168.0.99
 192.168.0.100  192.168.0.101 192.168.0.102 192.168.0.103
 -------+------------+--------------+-----------+----------
        |            |              |           |
     +--+--+      +--+--+      +----+----+ +----+----+
     | lb1 |      | lb2 |      |  http1  | |  http2  |
     +-----+      +-----+      +---------+ +---------+
     haproxy      haproxy      2 web servers (Apache)
     keepalived   keepalived

The shared (virtual) IP address is no problem as long as you’re in your own LAN where you can assign IP addresses as you like. However, if you want to use this setup with public IP addresses, you need to find a hoster where you can rent two servers (the load balancer nodes) in the same subnet; you can then use a free IP address in this subnet for the virtual IP address. Here in Germany, Hetzner is a hoster that allows you to do this – just talk to them. Update: Hetzner’s policies have changed – please read here for more details: http://www.howtoforge.com/forums/showthread.php?t=19988

http1 and http2 are standard Debian Etch Apache setups with the document root /var/www (the configuration of this default vhost is stored in /etc/apache2/sites-available/default). If your document root differs, you might have to adjust this guide a bit.

To demonstrate the session-awareness of HAProxy, I’m assuming that the web application that is installed on http1 and http2 uses the session id JSESSIONID.

2 Preparing The Backend Web Servers

We will configure HAProxy as a transparent proxy, i.e., it will pass on the original user’s IP address in a field called X-Forwarded-For to the backend web servers. Of course, the backend web servers should log the original user’s IP address in their access logs instead of the IP addresses of our load balancers. Therefore we must modify the LogFormat line in /etc/apache2/apache2.conf and replace %h with %{X-Forwarded-For}i:

http1/http2:

vi /etc/apache2/apache2.conf

[...]
#LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
[...]

Also, we will configure HAProxy to check the backend servers’ health by continuously requesting the file check.txt (translates to /var/www/check.txt if /var/www is your document root) from the backend servers. Of course, these requests would totally bloat the access logs and mess up your page view statistics (if you use a tool like Webalizer or AWstats that generates statistics based on the access logs).

Therefore we open our vhost configuration (in this example it’s in /etc/apache2/sites-available/default) and put these two lines into it (comment out all other CustomLog directives in your vhost configuration):

vi /etc/apache2/sites-available/default

[...]
SetEnvIf Request_URI "^/check.txt$" dontlog
CustomLog /var/log/apache2/access.log combined env=!dontlog
[...]

This configuration prevents requests for check.txt from being logged in Apache’s access log.

Afterwards we restart Apache:

/etc/init.d/apache2 restart

… and create the file check.txt (this can be an empty file):

touch /var/www/check.txt
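
To make sure the health-check file is actually reachable, you can request it once by hand, for example with wget from one of the load balancer nodes (expect an HTTP 200 response):

wget -S -O /dev/null http://192.168.0.102/check.txt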

We are finished already with the backend servers; the rest of the configuration happens on the two load balancer nodes.

3 Installing HAProxy

Unfortunately HAProxy is available as a Debian package for Debian Lenny (testing) and Sid (unstable), but not for Etch. Therefore we will install the HAProxy package from Lenny. To do this, open /etc/apt/sources.list and add the line deb http://ftp2.de.debian.org/debian/ lenny main; your /etc/apt/sources.list could then look like this:

lb1/lb2:

vi /etc/apt/sources.list

deb http://ftp2.de.debian.org/debian/ etch main
deb-src http://ftp2.de.debian.org/debian/ etch main

deb http://ftp2.de.debian.org/debian/ lenny main

deb http://security.debian.org/ etch/updates main contrib
deb-src http://security.debian.org/ etch/updates main contrib

Of course (in order not to mess up our system), we want to install packages from Lenny only if there’s no appropriate package from Etch – if there are packages from Etch and Lenny, we want to install the one from Etch. To do this, we give packages from Etch a higher priority in /etc/apt/preferences:

vi /etc/apt/preferences

Package: *
Pin: release a=etch
Pin-Priority: 700

Package: *
Pin: release a=lenny
Pin-Priority: 650

(The terms etch and lenny refer to the appropriate terms in /etc/apt/sources.list; if you’re using stable and testing there, you must use stable and testing instead of etch and lenny in /etc/apt/preferences as well.)

Afterwards, we update our packages database:

apt-get update
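
If you want to verify that the pinning works as intended, apt-cache policy should now show the Lenny version as the installation candidate for haproxy (and the Etch versions for packages that exist in both releases):

apt-cache policy haproxy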

… upgrade the installed packages:

apt-get upgrade

… and install HAProxy:

apt-get install haproxy

4 Configuring The Load Balancers

The HAProxy configuration is stored in /etc/haproxy.cfg and is pretty straight-forward. I won’t explain all the directives here; to learn more about all options, please read http://haproxy.1wt.eu/download/1.3/doc/haproxy-en.txt and http://haproxy.1wt.eu/download/1.2/doc/architecture.txt.

We back up the original /etc/haproxy.cfg and create a new one like this:

lb1/lb2:

cp /etc/haproxy.cfg /etc/haproxy.cfg_orig
cat /dev/null > /etc/haproxy.cfg
vi /etc/haproxy.cfg

global
        log 127.0.0.1   local0
        log 127.0.0.1   local1 notice
        #log loghost    local0 info
        maxconn 4096
        #debug
        #quiet
        user haproxy
        group haproxy

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        retries 3
        redispatch
        maxconn 2000
        contimeout      5000
        clitimeout      50000
        srvtimeout      50000

listen webfarm 192.168.0.99:80
       mode http
       stats enable
       stats auth someuser:somepassword
       balance roundrobin
       cookie JSESSIONID prefix
       option httpclose
       option forwardfor
       option httpchk HEAD /check.txt HTTP/1.0
       server webA 192.168.0.102:80 cookie A check
       server webB 192.168.0.103:80 cookie B check
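
If you like, you can let HAProxy parse the new configuration in check mode before going any further; the -c flag only validates the configuration file and exits:

haproxy -c -f /etc/haproxy.cfg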

Afterwards, we set ENABLED to 1 in /etc/default/haproxy:

vi /etc/default/haproxy

# Set ENABLED to 1 if you want the init script to start haproxy.
ENABLED=1
# Add extra flags here.
#EXTRAOPTS="-de -m 16"

5 Setting Up keepalived

We’ve just configured HAProxy to listen on the virtual IP address 192.168.0.99, but someone has to tell lb1 and lb2 that they should listen on that IP address. This is done by keepalived which we install like this:

lb1/lb2:

apt-get install keepalived

To allow HAProxy to bind to the shared IP address, we add the following line to /etc/sysctl.conf:

vi /etc/sysctl.conf

[...]
net.ipv4.ip_nonlocal_bind=1

… and run:

sysctl -p

Next we must configure keepalived (this is done through the configuration file /etc/keepalived/keepalived.conf). I want lb1 to be the active (or master) load balancer, so we use this configuration on lb1:

lb1:

vi /etc/keepalived/keepalived.conf

vrrp_script chk_haproxy {           # Requires keepalived-1.1.13
        script "killall -0 haproxy"     # cheaper than pidof
        interval 2                      # check every 2 seconds
        weight 2                        # add 2 points of prio if OK
}

vrrp_instance VI_1 {
        interface eth0
        state MASTER
        virtual_router_id 51
        priority 101                    # 101 on master, 100 on backup
        virtual_ipaddress {
            192.168.0.99
        }
        track_script {
            chk_haproxy
        }
}

(It is important that you use priority 101 in the above file – this makes lb1 the master!)

Then we start keepalived on lb1:

lb1:

/etc/init.d/keepalived start

Then run:

lb1:

ip addr sh eth0

… and you should find that lb1 is now listening on the shared IP address, too:

lb1:/etc/keepalived# ip addr sh eth0
2: eth0: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:a5:5b:93 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.100/24 brd 192.168.0.255 scope global eth0
inet 192.168.0.99/32 scope global eth0
inet6 fe80::20c:29ff:fea5:5b93/64 scope link
valid_lft forever preferred_lft forever
lb1:/etc/keepalived#

Now we do almost the same on lb2. There’s one small, but important difference – we use priority 100 instead of priority 101 in /etc/keepalived/keepalived.conf which makes lb2 the passive (slave or hot-standby) load balancer:

lb2:

vi /etc/keepalived/keepalived.conf

vrrp_script chk_haproxy {           # Requires keepalived-1.1.13
        script "killall -0 haproxy"     # cheaper than pidof
        interval 2                      # check every 2 seconds
        weight 2                        # add 2 points of prio if OK
}

vrrp_instance VI_1 {
        interface eth0
        state MASTER
        virtual_router_id 51
        priority 100                    # 101 on master, 100 on backup
        virtual_ipaddress {
            192.168.0.99
        }
        track_script {
            chk_haproxy
        }
}

Then we start keepalived:

lb2:

/etc/init.d/keepalived start

As lb2 is the passive load balancer, it should not be listening on the virtual IP address as long as lb1 is up. We can check that with:

lb2:

ip addr sh eth0

The output should look like this:

lb2:~# ip addr sh eth0
2: eth0: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:e0:78:92 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.101/24 brd 192.168.0.255 scope global eth0
inet6 fe80::20c:29ff:fee0:7892/64 scope link
valid_lft forever preferred_lft forever
lb2:~#

6 Starting HAProxy

Now we can start HAProxy:

lb1/lb2:

/etc/init.d/haproxy start

7 Testing

Our high-availability load balancer is now up and running.

You can now make HTTP requests to the virtual IP address 192.168.0.99 (or to any domain/hostname that is pointing to the virtual IP address), and you should get content from the backend web servers.

You can test its high-availability/failover capabilities by switching off one backend web server – the load balancer should then redirect all requests to the remaining backend web server. Afterwards, switch off the active load balancer (lb1) – lb2 should take over immediately. You can check that by running:

lb2:

ip addr sh eth0

You should now see the virtual IP address in the output on lb2:

lb2:~# ip addr sh eth0
2: eth0: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:e0:78:92 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.101/24 brd 192.168.0.255 scope global eth0
inet 192.168.0.99/32 scope global eth0
inet6 fe80::20c:29ff:fee0:7892/64 scope link
valid_lft forever preferred_lft forever
lb2:~#

When lb1 comes up again, it will take over the master role again.

8 HAProxy Statistics

You might have noticed that we used the options stats enable and stats auth someuser:somepassword in the HAProxy configuration in chapter 4. This allows us to access (password-protected) HAProxy statistics under the URL http://192.168.0.99/haproxy?stats.
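
If you want to fetch the statistics page from the command line, something like the following should work (curl might have to be installed first with apt-get install curl):

curl -u someuser:somepassword "http://192.168.0.99/haproxy?stats"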

If you don’t need the statistics, just comment out or remove the stats lines from the HAProxy configuration.

kvm: windows 2000 does not boot because ntoskrnl.exe is missing or corrupt

Originally from: http://riaschissl.blogspot.com/2009/06/kvm-windows-2000-does-not-boot-because.html

We are currently in the process of migrating all our VMWare virtual hosts to KVM [1]. Sometimes this can be quite difficult because KVM apparently has some issues with older guest OSes such as Windows 2000.

So, if you try to boot Windows 2000, you might fail like this:

Disk I/o error: Status = 00000001
Disk I/o error: Status = 00000001
Disk I/o error: Status = 00000001

Windows 2000 could not start because the following file is missing or corrupt:

<windows 2000 root>\system32\ntoskrnl.exe.

The workaround [2] for that problem is to mark not the primary hard disk but the CD-ROM device as the boot device.

Update:
The workaround does not work as expected, unfortunately. It only works as long as you leave a bootable CD in the CD-ROM drive without actually booting from it. Windows installation CDs, however, prompt you to “press any key to boot from CD” before they actually boot from the CD; otherwise they continue with the other potential boot devices.

So for the time being, I have created an ISO image of the installation disk, attached it to the guest as a CD-ROM drive, and booted from that 🙂
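
For reference, a rough sketch of that approach; the device, paths, image names and memory size are placeholders, so adjust them to your environment:

# create an ISO image from the installation CD (source device and target path are placeholders)
dd if=/dev/cdrom of=/root/win2000-install.iso
# boot the guest from the ISO (-boot d selects the CD-ROM as the boot device)
kvm -m 512 -hda /var/lib/kvm/win2000.img -cdrom /root/win2000-install.iso -boot d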

[1] http://www.linux-kvm.org/
[2] http://www.mail-archive.com/kvm@vger.kernel.org/msg04157.html