Category Archives: Uncategorized

How to assign multiple IP addresses to one network interface on CentOS

Configuring multiple IP addresses on a single network interface is known as IP aliasing. IP aliasing is useful when you host multiple virtual web sites on one interface, or maintain several connections to a network, each serving a different purpose. The addresses assigned to one interface can come from a single subnet or from completely different ones.

All existing Linux distributions, including CentOS, support IP aliasing. Here is how to bind multiple IP addresses to a single network interface on CentOS.

If you would like to set up IP aliasing on the fly, there are two ways to do it: one is the ifconfig command, and the other is the ip command. Using these two methods, let me show you how to add two extra IP addresses to eth0.
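For a persistent configuration on CentOS, each alias can also be given its own interface file under /etc/sysconfig/network-scripts/. A minimal sketch follows; the device name and addresses here are illustrative examples, not values from the original article:

```
# /etc/sysconfig/network-scripts/ifcfg-eth0:0 -- example alias definition
DEVICE=eth0:0
IPADDR=192.168.1.100
NETMASK=255.255.255.0
ONBOOT=yes
```

After creating the file, restarting the network service (service network restart) brings the alias up, and ONBOOT=yes keeps it across reboots.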

Strengthening Password Security Policy on Linux and Windows Server

Catalog

1. Windows Security and Protection (Logon and Authentication)
2. Windows enforced password security policy
3. PAM (Pluggable Authentication Modules)
4. Linux enforced password security policy configuration

 

1. windows Security and Protection(Logon and Authentication)

This page lists resources for logon and authentication in Windows Server 2003, which includes passwords, Kerberos, NTLM, Transport Layer Security/Secure Sockets Layer (TLS/SSL), and Digest. In addition, some protocols are combined into authentication packages, such as Negotiate and Schannel, as part of an extensible authentication architecture.

0x1: Create a defense-in-depth model

1. Educate your users about how best to protect their accounts from unauthorized access 
https://technet.microsoft.com/en-us/library/cc784090#BKMK_UserBP

2. Use the system key utility (Syskey) on computers throughout your network. The system key utility uses strong encryption techniques to secure account password information that is stored in the Security Accounts Manager (SAM) database. 
    1) The system key utility: https://technet.microsoft.com/en-us/library/cc783856
    2) create or update a system key: 

3. Define a password policy that ensures that every user follows the password guidelines you decide are appropriate 
https://technet.microsoft.com/en-us/library/cc784090#BKMK_PasswordPolicy

4. Consider whether implementing an account lockout policy is appropriate for your organization. 
https://technet.microsoft.com/en-us/library/cc784090#BKMK_AccountLockout
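A password policy like the one described in point 3 can also be checked programmatically. The sketch below is illustrative only; the specific rules (minimum length, three of four character classes) are assumptions modeled loosely on Windows' "passwords must meet complexity requirements" setting, not Microsoft's exact algorithm:

```python
import re

def check_password_policy(password, min_length=8):
    """Check a password against an example complexity policy:
    a minimum length plus at least three of four character classes
    (uppercase, lowercase, digit, symbol). Hypothetical rules for
    illustration, not the actual Windows implementation."""
    if len(password) < min_length:
        return False
    classes = [
        re.search(r"[A-Z]", password),        # uppercase letters
        re.search(r"[a-z]", password),        # lowercase letters
        re.search(r"[0-9]", password),        # digits
        re.search(r"[^A-Za-z0-9]", password), # symbols
    ]
    return sum(1 for c in classes if c) >= 3

print(check_password_policy("Tr0ub4dor&3"))  # True: long enough, 4 classes
print(check_password_policy("password"))     # False: only one class
```

The same idea underlies pam_cracklib/pam_pwquality on the Linux side of this article.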
 


How to Create and Setup LUNs using LVM in “iSCSI Target Server” on RHEL/CentOS/Fedora – Part II

A LUN (Logical Unit Number) is a unit of storage shared from an iSCSI target server. The target server exports parts of its physical drives to initiators over a TCP/IP network, and a collection of such LUNs forms the large storage pool of a SAN (Storage Area Network). In real environments LUNs are usually defined on top of LVM, so that they can be expanded as space requirements grow.

Create LUNS using LVM in Target Server


Why are LUNs Used?

LUNs are used for storage: most SAN storage is built from groups of LUNs combined into a pool, where each LUN is a chunk of a physical disk on the target server. We can use a LUN as a system's physical disk to install an operating system, and LUNs are used in clusters, virtual servers, SANs, and so on; in virtual servers their main purpose is OS storage. A LUN's performance and reliability depend on the kind of disk used when building the target storage server.

How to Install and Configure HAProxy on CentOS/RHEL 7/6/5

HAProxy is a very fast and reliable solution for high availability and load balancing, supporting both TCP- and HTTP-based applications. Nowadays most websites need 99.999% uptime, which is not possible with a single-server setup, so we need a high-availability environment that can easily cope with the failure of any one server.
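A minimal HAProxy configuration sketch for such a setup follows; the bind address and back-end server IPs are placeholder examples, not values from the original article:

```
frontend http-in
    bind *:80
    default_backend web_servers

backend web_servers
    balance roundrobin
    server web1 192.168.1.101:80 check
    server web2 192.168.1.102:80 check
```

The `check` keyword enables health checks, so a failed back end is taken out of rotation automatically.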

IP Range to CIDR Converter

// Convert a given IP range to CIDR notation.

# cat rangeToCidr.c
/* rangeToCidr.c - Convert Ip ranges to CIDR */

/*
modification history http://snippets.dzone.com/tag/cidr
--------------------
,17sep08,karn written
*/

/* includes */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <math.h>
#include <errno.h>
#include <arpa/inet.h>

/* defines */
//#define DBG
#ifdef DBG
#define DEBUG(...) fprintf(stderr,__VA_ARGS__)
#else
#define DEBUG(...)
#endif /* DBG */

#define IP_BINARY_LENGTH 32+1 /* 32 bits ipv4 address +1 for null */
#define IP_HEX_LENGTH 10
#define MAX_CIDR_MASK 32
#define MAX_CIDR_LEN 18+1 /*255.255.255.255/32*/

/* Forward declarations */
void rangeToCidr(uint32_t from, uint32_t to,
                 void (*callback)(char *cidrNotation));
int ipToBin(uint32_t ip , char * pOut);

void printNotation(char *cidrNotation);

/* Globals */

/*******************************************************************************
*
* ipToBin - convert an ipv4 address to binary representation
* and pads zeros to the beginning of the string if
* the length is not 32
* (Important for ranges like 10.10.0.1 - 20.20.20.20 )
*
* ip - ipv4 address on host order
* pOut - Buffer to store binary.
*
* RETURNS: OK or ERROR
*/

int ipToBin(uint32_t ip, char *pOut)
{
    char hex[IP_HEX_LENGTH];
    int i;
    int len;
    char pTmp[2];
    int tmp;
    /*
     * XXX: Could use bit operations instead but was easier to debug
     */
    char binMap[16][5] = {
        "0000","0001","0010","0011","0100",
        "0101","0110","0111","1000","1001",
        "1010","1011","1100","1101","1110","1111",
    };

    pTmp[1] = 0x0;
    memset(hex, 0x0, sizeof(hex));
    len = sprintf(hex, "%x", ip);

    /* "%x" drops leading zeros, so pre-fill the output with '0' to get
     * a full 32-bit string (important for ranges like
     * 10.10.0.1 - 20.20.20.20) */
    memset(pOut, '0', IP_BINARY_LENGTH - 1);
    pOut[IP_BINARY_LENGTH - 1] = 0x0;

    /* Convert each hex digit to its 4-bit binary string, right-aligned */
    for (i = 0; i < len; i++)
    {
        pTmp[0] = hex[i];
        tmp = (int)strtol(pTmp, NULL, 16);
        memcpy(pOut + IP_BINARY_LENGTH - 1 - (len - i) * 4, binMap[tmp], 4);
    }

    if (strlen(pOut) > IP_BINARY_LENGTH - 1)
        return -1;

    /* Success */
    return 0;
}

/*******************************************************************************
* main :
*
* arg1 : Start Ip Address
* arg2 : End Ip address
*/

int main (int argc, char **argv)
{
    long fromIp, toIp;
    struct in_addr addr;

    if (argc != 3)
    {
        printf("Usage: %s <startIP> <endIP>\n", argv[0]);
        return(0);
    }

    /* All operations on host order */
    if (inet_aton(argv[1], &addr) == 0)
        goto error;
    fromIp = ntohl(addr.s_addr);

    if (inet_aton(argv[2], &addr) == 0)
        goto error;
    toIp = ntohl(addr.s_addr);

    rangeToCidr(fromIp, toIp, printNotation);

    return 0;

error:
    printf("Invalid Argument\n");
    return -EINVAL;
}

/*******************************************************************************
*
* rangeToCidr - convert an ip Range to CIDR, and call 'callback' to handle
* the value.
*
* from - IP Range start address
* to - IP Range end address
* callback - Callback function to handle cidr.
* RETURNS: OK or ERROR
*/

void rangeToCidr(uint32_t from, uint32_t to,
                 void (*callback)(char *cidrNotation))
{
    int cidrStart = 0;
    int cidrEnd = MAX_CIDR_MASK - 1;
    long newfrom;
    long mask;
    char fromIp[IP_BINARY_LENGTH];
    char toIp[IP_BINARY_LENGTH];
    struct in_addr addr;
    char cidrNotation[MAX_CIDR_LEN];

    memset(fromIp, 0x0, sizeof(fromIp));
    memset(toIp, 0x0, sizeof(toIp));

    if (ipToBin(from, fromIp) != 0)
        return;
    if (ipToBin(to, toIp) != 0)
        return;

    DEBUG("from %lu to %lu\n", from, to);
    DEBUG("from %s\n", fromIp);
    DEBUG("to %s\n", toIp);

    if (from < to)
    {
        /* Compare the from and to address ranges to get the first
         * point of difference */
        while (fromIp[cidrStart] == toIp[cidrStart])
            cidrStart++;
        cidrStart = 32 - cidrStart - 1;
        DEBUG("cidrStart is %u\n", cidrStart);

        /* Starting from the found point of difference make all bits on
         * the right side zero */
        newfrom = from >> (cidrStart + 1) << (cidrStart + 1);

        /* Starting from the end iterate in the reverse direction to
         * find cidrEnd */
        while (fromIp[cidrEnd] == '0' && toIp[cidrEnd] == '1')
            cidrEnd--;
        cidrEnd = MAX_CIDR_MASK - 1 - cidrEnd;
        DEBUG("cidrEnd is %u\n", cidrEnd);

        if (cidrEnd <= cidrStart)
        {
            /*
             * Make all the bit-shifted bits equal to 1, for
             * iteration # 1.
             */
            mask = pow(2, cidrStart) - 1;
            DEBUG("it1 is %lu \n", newfrom | mask);
            rangeToCidr(from, newfrom | mask, callback);
            DEBUG("it2 is %lu \n", newfrom | 1 << cidrStart);
            rangeToCidr(newfrom | (1 << cidrStart), to, callback);
        }
        else
        {
            addr.s_addr = htonl(newfrom);
            sprintf(cidrNotation, "%s/%d", inet_ntoa(addr),
                    MAX_CIDR_MASK - cidrEnd);
            if (callback != NULL)
                callback(cidrNotation);
        }
    }
    else
    {
        addr.s_addr = htonl(from);
        sprintf(cidrNotation, "%s/%d", inet_ntoa(addr), MAX_CIDR_MASK);
        if (callback != NULL)
            callback(cidrNotation);
    }
}

/*******************************************************************************
*
* printNotation - This is an example callback function to handle cidr notation.
*
* RETURNS: N/A
*/

void printNotation(char *cidrNotation)
{
    printf("%s\n", cidrNotation);
}
Compile:

# gcc rangeToCidr.c -lm -o rang2cidr
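If a C compiler is not at hand, the same conversion is available in Python's standard library via the ipaddress module; this sketch is only a cross-check of what the C program computes, not part of the original article:

```python
import ipaddress

def range_to_cidr(start, end):
    """Convert an inclusive IPv4 range to a minimal list of CIDR blocks."""
    first = ipaddress.IPv4Address(start)
    last = ipaddress.IPv4Address(end)
    # summarize_address_range yields the fewest networks covering the range
    return [str(net) for net in ipaddress.summarize_address_range(first, last)]

print(range_to_cidr("192.168.0.0", "192.168.255.255"))  # ['192.168.0.0/16']
print(range_to_cidr("10.0.0.1", "10.0.0.2"))            # ['10.0.0.1/32', '10.0.0.2/32']
```

Like the C version, an unaligned range splits into multiple CIDR blocks.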

Perl version:

#!/usr/bin/perl -w
# range2cidr.pl

use Net::CIDR;
use Net::CIDR ':all';

if (@ARGV == 0) {
die "Usage Example: $0 192.168.0.0-192.168.255.255 \n";
}

print join("\n", Net::CIDR::range2cidr("$ARGV[0]")) . "\n";

Merging CIDRs:

#!/usr/bin/perl

use Net::CIDR::Lite;

my $cidr = Net::CIDR::Lite->new;

$cidr->add("202.38.175.0/24");
$cidr->add("202.38.174.0/24");
$cidr->add("202.38.173.0/24");
$cidr->add("202.38.172.0/24");
$cidr->add("202.38.171.0/24");
$cidr->add("202.38.170.0/24");
$cidr->add("202.38.169.0/24");
$cidr->add("202.38.168.0/24");

print "$_\n" for $cidr->list;
# Output: 202.38.168.0/21
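The same merge can be cross-checked with Python's standard library (an independent re-check of the Perl result, not from the original article):

```python
import ipaddress

# The eight contiguous /24 blocks from the Perl example above.
nets = [ipaddress.IPv4Network(f"202.38.{i}.0/24") for i in range(168, 176)]

# collapse_addresses merges adjacent/overlapping networks into the
# smallest equivalent set of CIDR blocks.
merged = list(ipaddress.collapse_addresses(nets))
print([str(n) for n in merged])  # ['202.38.168.0/21']
```

Eight aligned /24s collapse into one /21, matching Net::CIDR::Lite's output.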

Power-Saving Tips for File Systems on Linux

The file system is an important part of a Linux system, and how it is configured and used significantly affects how the whole system runs. This article introduces some file-system configuration tips for Linux that reduce power consumption; some of them also improve system performance. Although the energy savings from the file system are modest compared with those from the CPU or the display, every little bit adds up, and a greener planet is built one small step at a time.

This article assumes that the user's main file systems reside on a hard disk. Compared with the CPU and memory, the hard disk is a relatively idle component: it consumes very little power when idle, but consumption rises sharply once it spins up for reads and writes. The core idea of file-system power saving is therefore to minimize disk I/O and keep the disk idle as much as possible.

Linux Memory Monitoring

#!/bin/bash
#
# Monitor memory usage and react when it runs low; can be added to
# /etc/rc.local to run as a daemon-style script.
#
# free
#              total       used       free     shared    buffers     cached
# Mem:       2074716     702972    1371744          0     123612     478028
# -/+ buffers/cache:     101332    1973384
# Swap:      4088532          0    4088532
#
MINRATIO="0.05"
#while true
while :
do
    MemTotal=`free|grep "Mem"|awk '{ print $2 }'`
    MemFree=`free|grep "Mem"|awk '{ print $4 }'`
    Result=`echo | awk '{ print "'$MemFree'" / "'$MemTotal'" }'`
    RetVal=`awk 'BEGIN { print ("'$Result'" < "'$MINRATIO'"); }'`

    if [ ${RetVal} -eq 1 ]; then
    #    echo "Restart Apache"
        /usr/local/apache/bin/apachectl restart
    fi
    sleep 60
done
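The free-to-total calculation in the script can be expressed more directly. A small sketch, using the sample numbers from the script's own header comment; the 5% threshold mirrors MINRATIO above:

```python
def mem_free_ratio(mem_total, mem_free):
    """Return the fraction of physical memory that is completely free.
    Note: like the shell script above, this ignores buffers/cache,
    so it can under-report memory that is actually reclaimable."""
    return mem_free / mem_total

# Values taken from the sample `free` output in the script's header.
ratio = mem_free_ratio(2074716, 1371744)
print(round(ratio, 4))
print(ratio < 0.05)  # would the script restart Apache? False here
```

Because buffers and cache are counted as "used", a long-running but healthy box can trip this check; comparing against the "-/+ buffers/cache" line would be a more forgiving variant.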


Load Balancing with Apache 2.2.x

In some scenarios we need to place an Apache server in front as a load balancer, with several application servers such as Apusic, Tomcat, or WebLogic behind it; requests sent by clients to Apache are distributed to these back-end servers, which actually fulfil them. This article describes how to use Apache as a load balancer and the different load-balancing configurations.


We assume Apache is installed on the server myserver, and that when users visit http://myserver/ their requests should be balanced across two back-end servers: http://192.168.6.37:8080/ and http://192.168.6.37:6888/
I. Installing and Recompiling Apache

1. Installing Apache on Linux

Download the latest Apache source package, httpd-2.2.3.tar.gz.

1) Unpack it:

         gzip -d httpd-2.2.3.tar.gz
         tar xvf httpd-2.2.3.tar

2) After unpacking, cd into the extracted httpd-2.2.3 directory and run the following in a terminal:
./configure --prefix=/usr/local/httpd --enable-so --enable-proxy --enable-proxy-ajp --enable-proxy-http --enable-proxy-ftp --enable-proxy-connect --enable-proxy-balancer
      By default an Apache build does not include these proxy modules, so they would have to be loaded manually; with the options above these DSO modules are compiled in at build time.

3) In a terminal, run: make

4) In a terminal, run: make install

5) Go to Apache's bin directory and run: apachectl -k start

6) Open http://myserver in a browser (port 80 by default); if "It works!" appears, Apache has started normally.

2. Installing Apache on Windows

1) Download apache_2.2.2-win32-x86-no_ssl.msi or another Apache release
2) Run the file to install it directly
3) To configure load balancing, go to the conf directory under the Apache installation directory, open httpd.conf, and uncomment the lines that load mod_proxy.so, mod_proxy_ajp.so, mod_proxy_balancer.so, mod_proxy_connect.so, mod_proxy_http.so, and mod_proxy_ftp.so; load balancing can then be configured.

4) Open http://myserver in a browser (port 80 by default); if "It works!" appears, Apache has started normally.

II. Configuring Apache as a Load Balancer

There are three different ways to deploy Apache as a load-balancing front end:

1) Round-robin balancing

Go to Apache's conf directory, open httpd.conf, and append the following at the end of the file:

    ProxyPass / balancer://proxy/         # note the trailing "/" 
    <Proxy balancer://proxy> 
           BalancerMember http://192.168.6.37:6888/ 
           BalancerMember http://192.168.6.38:6888/ 
    </Proxy>  

Look at the parameter "ProxyPass / balancer://proxy/": "ProxyPass" is the directive that configures the virtual server; "/" is the URL prefix of the web requests to match, so URLs such as http://myserver/ or http://myserver/aaa all satisfy the filter; "balancer://proxy/" declares a load balancer, where proxy is the balancer's name; each BalancerMember line names a back-end server by the URL used to reach it. With the configuration above, load balancing works as follows:

Suppose Apache receives the request http://localhost/aaa. Since it matches the ProxyPass condition (its URL prefix is "/"), the request is dispatched to one of the BalancerMembers; for example, it may be forwarded to http://192.168.6.37:6888/aaa for processing. When a second matching request arrives, it may be dispatched to the other BalancerMember, e.g. forwarded to http://192.168.6.38:6888/. Repeating this cycle implements the load-balancing mechanism.

2) Weighted request balancing

    ProxyPass / balancer://proxy/         # note the trailing "/" 
    <Proxy balancer://proxy> 
            BalancerMember http://192.168.6.37:6888/  loadfactor=3 
            BalancerMember http://192.168.6.38:6888/  loadfactor=1 
    </Proxy>  

The "loadfactor" parameter is the weight with which Apache distributes requests to a back-end server; it defaults to 1 and can be set to any value from 1 to 100. With the configuration above, suppose Apache receives 4 requests for http://myserver/aaa: 3 consecutive requests go to the BalancerMember at http://192.168.6.37:6888 and 1 request goes to the BalancerMember at http://192.168.6.38:6888, implementing weighted distribution by request count.
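The loadfactor behaviour described above can be sketched as a weighted scheduler. This is a simplified illustrative model of weighted request distribution, not Apache's actual mod_proxy_balancer implementation:

```python
def weighted_round_robin(members, requests):
    """Distribute requests across (name, loadfactor) members using a
    smooth weighted round-robin. A simplified model for illustration,
    not mod_proxy_balancer's real algorithm."""
    current = {name: 0 for name, _ in members}
    total = sum(weight for _, weight in members)
    order = []
    for _ in range(requests):
        # each member accumulates its weight every round
        for name, weight in members:
            current[name] += weight
        # the member with the highest accumulated weight is picked
        pick = max(current, key=current.get)
        current[pick] -= total
        order.append(pick)
    return order

members = [("192.168.6.37:6888", 3), ("192.168.6.38:6888", 1)]
print(weighted_round_robin(members, 4))
```

Out of every 4 requests, 3 land on the loadfactor=3 member and 1 on the loadfactor=1 member, matching the 3:1 ratio in the text (though not necessarily as 3 strictly consecutive requests).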

3) Weighted traffic (byte-count) balancing

    ProxyPass / balancer://proxy/ lbmethod=bytraffic  # note the trailing "/" 
    <Proxy balancer://proxy> 
             BalancerMember http://192.168.6.37:6888/  loadfactor=3 
             BalancerMember http://192.168.6.38:6888/  loadfactor=1 
     </Proxy>  

The parameter "lbmethod=bytraffic" makes the balancer weigh back-end servers by the number of request and response bytes they handle, with "loadfactor" serving as the weight on that byte count; it defaults to 1 and can be set anywhere from 1 to 100. With the configuration above, when Apache receives requests such as http://myserver/aaa and forwards them to the back end, the BalancerMember at http://192.168.6.37:6888 will handle three times as many request/response bytes as the one at http://192.168.6.38:6888. (Compare with configuration (2): (2) balances by request count, while (3) balances by traffic; that is the main difference.)

Note: after every change to httpd.conf, restart Apache with apachectl -k restart.

**********************************************************

As traffic keeps growing and response-time requirements tighten, setting up load balancing becomes necessary. Load balancing was already planned when our system was first designed: two static www servers were provisioned, but because the initial schedule was tight and traffic was low, only one was put into use; the other sat on the internal network, kept in sync but never put to work. What follows is a simple load-balancing test.
The applicable mod_proxy_balancer configuration rules (found online) are the three deployment modes introduced above.

Depending on your needs, any of the three modes can be used. I chose the third (bytraffic), which feels like the most thorough and reasonable way to balance the load. My configuration is simple. First the balancer:

    <Proxy balancer://proxy> 
           BalancerMember ajp://127.0.0.1:8009/  loadfactor=1 
           BalancerMember http://192.168.10.6:8083/  loadfactor=1 
    </Proxy> 

Here http://192.168.10.6:8083 is actually another Apache instance started on a separate port; for this test it simply forwards all requests straight to Tomcat.
Then make the following change to the VirtualHost from last time:

    <VirtualHost *:80> 
            ServerName www.test.com 
            DocumentRoot /www 
            DirectoryIndex index.html index.jsp 
            <Directory "/www"> 
                Options Indexes FollowSymLinks 
                AllowOverride None 
                Order allow,deny 
                Allow from all 
            </Directory> 
            <Directory "/control"> 
                Options Indexes FollowSymLinks 
                AllowOverride None 
                Order allow,deny 
                Allow from all 
            </Directory> 
            ProxyPass /nxt/images/ ! 
            ProxyPass /nxt/js/ ! 
            ProxyPass /nxt/css/ ! 
            #ProxyPass / ajp://127.0.0.1:8009/ 
            #ProxyPassReverse / ajp://127.0.0.1:8009/ 
            ProxyPass / balancer://proxy/ 
            ProxyPassReverse / balancer://proxy/ 
    </VirtualHost> 

The earlier ajp forwarding is commented out; requests are now handled through the balancer instead.
Watching the access log shows that some requests were indeed sent to the Apache on port 8083, while others were forwarded directly to Tomcat over ajp. Further testing of the other load-balancing parameters will have to wait until I have more time.