ffmpeg-php error

While compiling ffmpeg-php, the build fails with:

/usr/src/ffmpeg-php-0.6.0/ffmpeg_frame.c: In function 'zif_ffmpeg_frame_toGDImage':
/usr/src/ffmpeg-php-0.6.0/ffmpeg_frame.c:336: error: 'PIX_FMT_RGBA32' undeclared (first use in this function)
/usr/src/ffmpeg-php-0.6.0/ffmpeg_frame.c:336: error: (Each undeclared identifier is reported only once
/usr/src/ffmpeg-php-0.6.0/ffmpeg_frame.c:336: error: for each function it appears in.)
/usr/src/ffmpeg-php-0.6.0/ffmpeg_frame.c: In function 'zif_ffmpeg_frame_ffmpeg_frame':
/usr/src/ffmpeg-php-0.6.0/ffmpeg_frame.c:421: error: 'PIX_FMT_RGBA32' undeclared (first use in this function)


Fix: with the latest version of ffmpeg-php (0.6.0), edit ffmpeg_frame.c and replace every instance of PIX_FMT_RGBA32 with PIX_FMT_RGB32 (the constant was renamed in newer ffmpeg releases).

vi ffmpeg_frame.c
:%s/PIX_FMT_RGBA32/PIX_FMT_RGB32/g
:wq

Then rebuild and install:

./configure
make
make install

Finally, add extension="ffmpeg.so" to php.ini.
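The same substitution can also be done non-interactively with sed (a sketch; run it from the ffmpeg-php source directory):

```shell
# Replace every PIX_FMT_RGBA32 with PIX_FMT_RGB32 in place,
# keeping a backup of the original file as ffmpeg_frame.c.bak
sed -i.bak 's/PIX_FMT_RGBA32/PIX_FMT_RGB32/g' ffmpeg_frame.c
```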



Services linked against the libwrap library (tcp_wrappers) can use hosts.allow and hosts.deny to control access. To check whether a service is linked against libwrap:
ldd  /usr/sbin/vsftpd    |grep libwrap
ldd  /usr/sbin/sendmail  |grep libwrap
ldd  /usr/sbin/sshd      |grep libwrap
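The three checks above can be rolled into one loop (a sketch; the daemon paths assume a stock CentOS layout):

```shell
# Report which daemons are linked against libwrap (tcp_wrappers)
# and therefore honour /etc/hosts.allow and /etc/hosts.deny
for svc in /usr/sbin/vsftpd /usr/sbin/sendmail /usr/sbin/sshd; do
    if [ -x "$svc" ] && ldd "$svc" | grep -q libwrap; then
        echo "$svc: tcp_wrappers supported"
    fi
done
```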

To restrict a host or network's access to a service:

1.  Using a hostname/domain name
vim /etc/hosts.deny
vsftpd: .example.com          -> all hosts in the example.com domain are denied FTP access
vsftpd: server.example.com    -> the host server.example.com is denied access

2.  Using an IP address/network
vim /etc/hosts.deny
vsftpd: 192.168.1.      -> all hosts in the 192.168.1.0 network are denied (a trailing dot matches the whole network)
vsftpd: 192.168.1.4     -> the single host 192.168.1.4 is denied

3.  To deny all except a few
vim /etc/hosts.deny
sshd: ALL EXCEPT matrix.com       -> every host other than matrix.com is denied SSH access

4.  To allow all except a few
vim /etc/hosts.allow
ALL: .example.com EXCEPT cracker.example.com    -> all example.com hosts except cracker.example.com may connect to all services

Allow and deny entries can be given in either the hosts.allow or the hosts.deny file; hosts.allow is consulted first, and the first matching rule wins.
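The EXCEPT examples above generalize to a common default-deny policy: permit known clients in hosts.allow and block everything else in hosts.deny (a sketch; the network and domain below are illustrative):

```
# /etc/hosts.allow -- consulted first; first matching rule wins
sshd:   192.168.1.
vsftpd: .example.com

# /etc/hosts.deny -- reached only when nothing matched in hosts.allow
ALL: ALL
```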

Configure a Linux-HA high availability heartbeat cluster

In this example we will configure a webserver using Apache and cluster it. It can be implemented on CentOS, Fedora and other Red Hat flavors.

Pre-Configuration Requirements

Following are the hostnames that will be used (each node has its own IPv4 address; sigma is the shared, floating address):

  • prime (webserver)
  • calc (webserver)
  • sigma (HA address)


1. Download and install the heartbeat package. In our case we are using CentOS, so we will install heartbeat with yum:

yum install heartbeat



2. Now we have to configure heartbeat on our two-node cluster. We will deal with three files. These are:

  1. /etc/ha.d/ha.cf: protocol, server options and servers.
  2. /etc/ha.d/authkeys: shared keysfile
  3. /etc/ha.d/haresources: resource definitions


For the example setup the ha.cf file looks like the following:

debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility     local0
keepalive 2
deadtime 10
udpport 694
bcast     eth0
node    prime
node    calc
auto_failback on

The above options are pretty straightforward: where the debug log and the log file are, the log facility, the heartbeat keepalive interval in seconds, the deadtime (seconds without a heartbeat before a node is declared dead), the UDP port, the interface to broadcast on, the nodes in the cluster, and whether resources fail back automatically when the primary returns.


Next, create the shared authentication file /etc/ha.d/authkeys. The documentation explains the various options, but for this example we are using the sha1 algorithm:

#vi authkeys
edit as follows
auth 2
#1 crc
2 sha1 test-ha
#3 md5 Hello!

Also, the authkeys file must be readable only by root:

chmod 0600 authkeys


The haresources file dictates the shared address and the init services to start up (or shut down as the case may be). Each line names the primary node first, then the shared address, then the services:

prime apache2

The starting or primary server is put as the first argument. Now that the configuration is done on the primary server, the exact same settings can be used on the secondary one.

Copy the /etc/ha.d/ directory from prime to calc:

scp -r /etc/ha.d/ root@calc:/etc/

3.  Now exchange and save SSH authorized keys between prime and calc, so the nodes can copy files to each other without a password.
Key exchange:

On prime:

Generate the key:

[root@prime ~]# ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
9f:5d:47:6b:2a:2e:c8:3e:ee:8a:c2:28:5c:ad:57:79 root@prime

Pass the key to calc:
[root@prime ~]# scp .ssh/id_dsa.pub calc:/root/.ssh/authorized_keys

On calc:

Generate the key:

[root@calc ~]# ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
40:66:t8:bd:ac:bf:68:38:22:60:d8:9f:18:7d:94:21 root@calc

Pass the key to prime:
[root@calc ~]# scp .ssh/id_dsa.pub prime:/root/.ssh/authorized_keys

NOTE: We don't need to create a virtual network interface or assign an IP address to it manually. Heartbeat will do this for you and start the service (httpd) itself, so don't worry about this.

4. A basic apache server for the test is required as well:

 #yum install httpd*

To illustrate the test, put a simple page containing each webserver's hostname into /var/www/html/index.html:
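One way to do this (a sketch; /var/www/html is the default document root on CentOS):

```shell
# Write this node's hostname into the test page, so you can tell
# which server answered when the cluster fails over
DOCROOT=${DOCROOT:-/var/www/html}
hostname > "$DOCROOT/index.html"
```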


Next, start the webservers and set them to start at boot (run on both systems):

service apache2 start
chkconfig apache2 on

(On a stock CentOS install the Apache init script is named httpd rather than apache2; use whichever name your system has, and keep it consistent with the haresources entry.)

Now it is time to test the systems separately with lynx --dump:

# lynx --dump prime

# lynx --dump calc

5. On both nodes, review the Apache configuration (make sure the Listen directive also covers the shared address):

#vi /etc/httpd/conf/httpd.conf

Firing it Up

Starting up is pretty simple:

# chkconfig heartbeat on
# service heartbeat start
Starting High-Availability services
2009/07/25_21:04:30 INFO:  \
        Resource is stopped
heartbeat[4071]: 2009/07/25_21:04:30 info: Version 2 support: false
heartbeat[4071]: 2009/07/25_21:04:30 info: **************************
heartbeat[4071]: 2009/07/25_21:04:30 info: \
        Configuration validated. Starting heartbeat 2.99.3

Now a litmus test of the shared address:

#  lynx --dump sigma


Testing can be a little tricky. The simplest way is to stop the heartbeat service on the active node and let the other one take over; observe the log entries on the calc node:

IPaddr[5106]:   2009/07/25_21:32:55 INFO: eval \
        ifconfig eth0:0 netmask broadcast
IPaddr[5089]:   2009/07/25_21:32:55 INFO:  Success
ResourceManager[5006]:  2009/07/25_21:32:55 \
        info: Running /etc/init.d/apache2  start
mach_down[4980]:        2009/07/25_21:32:58 info: \
        mach_down takeover complete for node prime.
heartbeat[4241]: 2009/07/25_21:33:05 WARN: node prime: is dead
heartbeat[4241]: 2009/07/25_21:33:05 info: Dead node prime gave up resources.
heartbeat[4241]: 2009/07/25_21:33:05 info: Resources being acquired from prime.
heartbeat[4241]: 2009/07/25_21:33:05 info: Link prime:eth0 dead.
harc[5258]:     2009/07/25_21:33:06 info: Running /etc/ha.d/rc.d/status status
heartbeat[5259]: 2009/07/25_21:33:06 info: \
        No local resources [/usr/share/heartbeat/ResourceManager \
        listkeys calc] to acquire.
mach_down[5287]:        2009/07/25_21:33:06 info: \
        Taking over resource group
ResourceManager[5313]:  2009/07/25_21:33:06 \
        info: Acquiring resource group: prime apache2
IPaddr[5340]:   2009/07/25_21:33:06 INFO:  Running OK
mach_down[5287]:        2009/07/25_21:33:07 \
        info: mach_down takeover complete for node prime.

And a quick check with lynx:

#  lynx --dump sigma

Note that once prime is back online, calc gives control back:

ResourceManager[5515]:  2009/07/25_21:33:43 info: \
        Releasing resource group: prime apache2
ResourceManager[5515]:  2009/07/25_21:33:43 info: \
        Running /etc/init.d/apache2  stop
ResourceManager[5515]:  2009/07/25_21:33:44 info: \
        Running /etc/ha.d/resource.d/IPaddr stop
IPaddr[5592]:   2009/07/25_21:33:44 INFO: ifconfig eth0:0 down
IPaddr[5575]:   2009/07/25_21:33:44 INFO:  Success

Don't use the IP addresses that heartbeat uses for communication between prime and calc for services. If either of them is used for services/resources, it will disturb heartbeat and the cluster will not work. Be careful!

ulimit in CentOS

By default the per-process open-file limit (ulimit -n) is only 1024. First you need to increase the system-wide maximum by adding

fs.file-max = 65536

to /etc/sysctl.conf, and then applying it with "sysctl -p".

Then switch to /etc/security/limits.conf and add the following lines:

* hard nofile 65536
* soft nofile 16384

'su' to the user you need to raise the file limit for. With the settings above, the default soft limit for all users will be 16384, and you can raise it as far as the hard limit with "ulimit -n XXXX".
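You can verify the resulting limits from that user's shell (a quick sketch):

```shell
# Per-process open-file limits for the current shell
echo "soft: $(ulimit -Sn)"
echo "hard: $(ulimit -Hn)"
# System-wide ceiling set via fs.file-max
cat /proc/sys/fs/file-max
```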