In a normal home environment nobody runs a log server, as it does not seem to have any practical use - or does it?
In an enterprise environment it is natural that administrators look through the servers once in a while (or at least give them a quick checkup when something crashes). In big enterprises it is also natural to have a central log host that collects all the logs into one place, and a team of security professionals who audit them. The good side of having a log server is that you can correlate events over an extended period - for example, a smart hacker who has time to crack a root password will never run an open brute force attack against your system. He will probe your host maybe once a day from one IP address, but he will most likely control other hosts too, so there can be hundreds if not thousands of attempts per day from different hosts at different times. In a single day you may only see one or two attempts from a single IP, and since they come at different times it looks as if some user just "accidentally" entered a wrong IP or host name. If this happens frequently enough the logs grow big enough for rotation, and most likely they will rotate daily. So within one day's time frame nothing suspicious has happened, and it is not likely that the administrators will dig into the rotated logs (if they even can - by default only 4 rotations are kept). This gives the attacker perfect cover, and all he has to do is wait and hope that one of the passwords works.
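This kind of low-and-slow probing only becomes visible when you count attempts over weeks instead of a single day. As a rough illustration, here is a small Python sketch that tallies failed SSH logins per source IP across the current and rotated auth logs - the log paths and the stock OpenSSH "Failed password" message are assumptions about the setup:

```python
import glob, gzip, re
from collections import Counter

# Count "Failed password" lines per source IP across auth.log and its
# rotated copies (plain or gzipped). Paths assume a Debian-style layout.
FAILED_RE = re.compile(r'Failed password .* from (\d+\.\d+\.\d+\.\d+)')

attempts = Counter()
for path in glob.glob('/var/log/auth.log*'):
    opener = gzip.open if path.endswith('.gz') else open
    with opener(path, 'rt', errors='replace') as f:
        for line in f:
            m = FAILED_RE.search(line)
            if m:
                attempts[m.group(1)] += 1

# An attacker probing "only once a day" still stands out over a month.
for ip, count in attempts.most_common(20):
    print('%-15s %d' % (ip, count))
```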
Another type of attack that has come into use lately is smurfed brute forcing. In this scenario a company has several servers visible to the outside world, and each server or group of servers is administered by a different administrator. The attacker just tries a few times a day from a single IP against each server, and if communication between the administrators is not working perfectly and none of them is paranoid enough to start poking around in the logs, it will also go unnoticed. Once one server's administrator account is compromised, installing a rootkit and getting into the other servers is a piece of cake.
Now, if the enterprise has set up a central log server (or a server cluster), all logs are in one place for auditing. It also provides extra security, as the log server can be firewalled off except for the logging ports, so hacking the log server itself is harder - and if a host suddenly stops logging, that is a perfect reason to audit it. There are several ways to achieve perfection:
log all messages into daily logs - the downside is that everything is scrambled together and finding a pattern in the mess is hard
separate logs by host - better, but still not perfect, as the logs are still a bit of a mess and in the long term grow big and slow to search
separate logs by host and facility - the logs will have good quality, but again they will grow too big and at some point have to be dumped
separate logs by host, year, month, day and facility - perfect for small to medium enterprises: you have logs that go back a long time, properly sorted and separated, and new log files are created when the day/month/year changes, so you get a good overview of a host's status at any given date, and searching for a pattern can be extended over several hosts (a minimal receiver sketch follows this list)
separate logs by host, year, month, day, facility and provide extra daily dumps of data from all hosts and all facilities - this method gives an extra feature not available in the previous ones: the dumps can be imported into an SQL database to speed up searching and pattern matching, which also allows previously created patterns to be run against the logs (see the database sketch below). The drawbacks are that hard drive space is wasted on duplicates (yes, keeping the duplicates is useful, as one file is used for the SQL dumps and the other for real time auditing) and that the analysis can only be run on dumps of yesterday's data. The storage issue can be overcome if the dumps are imported into a temporary database and afterwards moved into the proper database, which is backed up as well.
separate logs by host, year, month, day, facility, daily full dumps and real time database logging - this method is the mother of them all. It is the most complex and has the most features. All logs are still saved into files, so if the database connection drops nothing is lost. All logs are also pushed into the database, where almost real time analysis can be performed with complex patterns. The daily logs can be run through a real time event correlation application that searches for predefined patterns in the logs and generates events based on the rules.
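To make the fourth option concrete, here is a minimal Python sketch of a UDP syslog receiver that files messages by host, year, month, day and facility. It is only an illustration of the directory layout - in real life you would let syslog-ng or rsyslog do this with a file template - and the LOG_ROOT path is my own choice:

```python
import os, socket, time

# File every incoming syslog message under host/year/month/day/facility.
LOG_ROOT = '/var/log/hosts'
FACILITIES = ['kern', 'user', 'mail', 'daemon', 'auth', 'syslog', 'lpr',
              'news', 'uucp', 'cron', 'authpriv', 'ftp', 'ntp', 'audit',
              'alert', 'clock', 'local0', 'local1', 'local2', 'local3',
              'local4', 'local5', 'local6', 'local7']

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('0.0.0.0', 514))           # standard syslog/UDP port, needs root

while True:
    data, (host, _port) = sock.recvfrom(4096)
    msg = data.decode('utf-8', errors='replace')
    facility = 'user'                 # default when no <PRI> header
    if msg.startswith('<') and '>' in msg[:5]:
        pri, rest = msg[1:].split('>', 1)
        if pri.isdigit():             # facility = priority / 8 (RFC 3164)
            facility = FACILITIES[int(pri) // 8 % len(FACILITIES)]
            msg = rest
    year, month, day = time.strftime('%Y %m %d').split()
    dirname = os.path.join(LOG_ROOT, host, year, month, day)
    os.makedirs(dirname, exist_ok=True)
    with open(os.path.join(dirname, facility + '.log'), 'a') as f:
        f.write(msg.rstrip('\n') + '\n')
```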
Which one is good for whom depends on the needs of the company, but all of them work well, and the rest is up to the creativity of the system administrator.
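Whichever you pick, the database variants (the last two options) deserve a closer look. Here is a sketch that imports one daily dump into SQLite and runs a single example pattern over it - the dump file name, the table schema and the classic syslog line format are all assumptions:

```python
import re, sqlite3

# Assumed dump format: one classic syslog line per entry, e.g.
# "Oct 24 01:00:02 host1 sshd[4242]: Failed password for root from 10.0.0.5 port 5555 ssh2"
LINE_RE = re.compile(r'^(\w{3}\s+\d+ [\d:]{8}) (\S+) (\S+?): (.*)$')
IP_RE = re.compile(r' from (\d+\.\d+\.\d+\.\d+)')

def import_dump(db, path):
    db.execute("CREATE TABLE IF NOT EXISTS logs "
               "(ts TEXT, host TEXT, tag TEXT, msg TEXT, src_ip TEXT)")
    with open(path, errors='replace') as f:
        for line in f:
            m = LINE_RE.match(line.rstrip('\n'))
            if not m:
                continue
            ip = IP_RE.search(m.group(4))
            db.execute("INSERT INTO logs VALUES (?, ?, ?, ?, ?)",
                       m.groups() + ((ip.group(1) if ip else None),))
    db.commit()

db = sqlite3.connect('logs.db')
import_dump(db, 'dump-2007-10-24.log')   # hypothetical daily dump file

# One example pattern: source IPs that probed more than one host,
# i.e. the "smurfed" brute force described earlier.
query = ("SELECT src_ip, COUNT(DISTINCT host) AS hosts, COUNT(*) AS tries "
         "FROM logs WHERE msg LIKE 'Failed password%' AND src_ip IS NOT NULL "
         "GROUP BY src_ip HAVING hosts > 1")
for ip, hosts, tries in db.execute(query):
    print('%s hit %d hosts with %d attempts' % (ip, hosts, tries))
```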
There is one dependency that the central log server has - all the clocks have to be in sync down to the second and they have to be in the same timezone, as logs with mismatched times are useless. Say one server reports that a hack happened at 01:00 and another server reports that the hacked server started to attack it at 06:02. Naturally this raises a question - what happened between 01:00 and 06:02? What was done on the first server that took 5 hours? In reality server 1 got hacked at 01:00 and the hacker took only 2 minutes to rootkit it and start hacking server 2, but with such a big time difference it is hard to get a real view of the events.
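Keeping the clocks in sync is normally ntpd's job, but a quick way to see how far a machine has drifted is a minimal SNTP query. A sketch (pool.ntp.org is just an example server):

```python
import socket, struct, time

NTP_DELTA = 2208988800            # seconds between 1900-01-01 and 1970-01-01

def ntp_time(server='pool.ntp.org'):
    packet = b'\x1b' + 47 * b'\x00'    # SNTP request: LI=0, VN=3, mode=3 (client)
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(5)
    s.sendto(packet, (server, 123))
    data, _ = s.recvfrom(48)
    s.close()
    secs = struct.unpack('!I', data[40:44])[0]  # transmit timestamp, seconds part
    return secs - NTP_DELTA

offset = ntp_time() - time.time()
print('local clock is off by %.3f seconds' % offset)
```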
So you have read this far and you ask - why am I blabbing about central log servers in enterprise environments when I started out talking about home systems? The answer is simple - I have a central log server implemented at home as well, and I did some statistics today.
As I'm not a regular user I have my systems secured pretty well, but the logs tell their own story. These statistics are based purely on SSH attacks and no other services:
2004 - 29 attempts total
2005 - 71 attempts total
2006 - 194 attempts total
2007 (up to 2007-10-24) - 3794 attempts total
All the statistics show are distributed attacks from different hosts, and I don't even want to estimate what next year will look like if the curve stays the same. Central logging is not a magic cure against hacking, but it gives you powerful tools that help diagnose a lot of problems.
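For the curious: pulling a yearly count like this out of the directory tree described above only takes a few lines. A sketch, assuming the /var/log/hosts/<host>/<year>/<month>/<day>/ layout from earlier and the stock OpenSSH "Failed password" message:

```python
import glob, gzip
from collections import Counter

# Count failed SSH logins per year from a host/year/month/day/facility
# tree (the layout is assumed; adjust the glob to match your own).
per_year = Counter()
for path in glob.glob('/var/log/hosts/*/*/*/*/auth.log*'):
    year = path.split('/')[5]          # /var/log/hosts/<host>/<year>/...
    opener = gzip.open if path.endswith('.gz') else open
    with opener(path, 'rt', errors='replace') as f:
        per_year[year] += sum(1 for line in f if 'Failed password' in line)

for year in sorted(per_year):
    print(year, '-', per_year[year], 'attempts total')
```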