Thursday, October 25, 2007

Central logserver - vol 1 - the basics...


In a normal home environment nobody runs a log server, as it seems to have no practical use - or does it?

In an enterprise environment it is natural that administrators look through the servers once in a while (or at least give them a quick checkup when something crashes). In big enterprises it is natural to have a central log host that collects all the logs into one place and a team of security professionals who audit them. The good side of having a log server is that you can correlate events over an extended period. For example, take a smart attacker who has time to crack a root password. He will never run an open brute force attack against your system. He will probe your host maybe once a day from one IP address, but he most likely has other hosts too, so there can be hundreds if not thousands of attempts per day from different hosts at different times. In a single day you may see one or a few attempts from a single IP, at different times, so it looks natural - as if some user just "accidentally" entered a wrong IP or host name. If this happens frequently enough the logs get big enough for rotation, and most likely they will rotate daily. So within one day's time frame nothing suspicious has happened, and it is not likely that the administrators will dig into the rotated logs (if they even can - by default only 4 rotations are kept). This gives the attacker perfect cover, and all he has to do is wait and hope that one of the passwords works.

Another type of attack that has become popular lately is smurfed brute forcing. In this scenario a company has several servers visible to the outside world, and each server or group of servers is administered by a different administrator. The attacker tries just a few times a day from a single IP against each server, and if the communication between the administrators is not perfect and none of them is paranoid enough to start poking through the logs, the attack will also go unnoticed. Once one server's administrator account is compromised, installing a rootkit and getting into the other servers is a piece of cake.

Now, if the enterprise has set up a central log server (or a server cluster), all logs are in one place for auditing. It also adds security: the log server can be firewalled off except for the logging ports, so hacking the log server itself is harder, and if a host suddenly stops logging, that is a perfect reason to audit it. There are several ways to organize the logs:
log all messages into daily logs - the downside is that everything is scrambled together and finding a pattern in the mess is hard
separate logs by host - better but still not perfect, as each log is still a bit of a mess and in the long term they grow big and slow
separate logs by host and facility - the logs are much more usable, but again they grow too big and at some point have to be dumped
separate logs by host, year, month, day and facility - perfect for small to medium enterprises: the logs go back a long time, are properly sorted and separated, and new logs are created when the day/month/year changes. This gives a good overview of a host's status at a given date, and searching for a pattern can be extended over several hosts
separate logs by host, year, month, day, facility and provide an extra daily dump of data from all hosts and all facilities - this method gives a feature not available in the previous ones: the dumps can be imported into an SQL database to speed up searching and pattern matching, which also allows running previously created patterns on the logs. The drawbacks are that hard drive space is wasted on duplicates (yes, keeping the duplicates is useful, as one file is used for SQL dumps and the other for real time auditing) and the analysis can only run on yesterday's dumps. The storage issue can be overcome if the dumps are imported into a temporary database and afterwards moved into the proper database, which is backed up as well
separate logs by host, year, month, day, facility, daily full dumps and real time database logging - this method is the mother of them all. It is the most complex and has the most features. All logs are saved into files, so if the database connection drops nothing is lost. All logs are also pushed into a database where almost real time analysis can be performed with complex patterns. The daily logs can be run through a real time event correlation application that searches for predefined patterns and generates events based on the rules
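As a sketch of how the sorted layout plus the daily dump could look with syslog-ng (assuming a syslog-ng 2.x log host; the paths and destination names are mine, adjust to taste):

# /etc/syslog-ng/syslog-ng.conf fragment - a minimal sketch, not a full config
source s_net { udp(ip(0.0.0.0) port(514)); };

# one file per host/year/month/day/facility
destination d_sorted {
  file("/var/log/hosts/$HOST/$YEAR/$MONTH/$DAY/$FACILITY.log" create_dirs(yes));
};

# one big daily dump of everything for the SQL import
destination d_dump {
  file("/var/log/dumps/$YEAR-$MONTH-$DAY-all.log" create_dirs(yes));
};

log { source(s_net); destination(d_sorted); destination(d_dump); };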

Which one is right depends on the needs of the company, but all of them work well - how far you take it depends on the creativity of the system administrator.

There is one thing the central log server absolutely depends on - all the clocks have to be in sync down to the second and all hosts have to be in the same timezone, as logs with differing times are useless. Say one server reports that a hack happened at 01:00 and the other reports that the hacked server started attacking it at 06:02. Naturally that raises questions - what happened between 01:00 and 06:02? What was done on the first server that took 5 hours? In reality server 1 got hacked at 01:00 and the attacker took only 2 minutes to rootkit it and start hacking server 2, but with such a big time difference it is hard to get a real view of the events.
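Getting the clocks in sync is just a matter of pointing every host at the same NTP source (a minimal sketch; the pool hostnames are examples):

# /etc/ntp.conf - same servers on every host
server 0.pool.ntp.org
server 1.pool.ntp.org

# verify that the host is actually syncing
ntpq -p

# and make sure every host uses the same timezone
ls -l /etc/localtime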

So you have read this far and you ask - why am I blabbing about central log servers in enterprise environments when I started out talking about home systems? The answer is simple - I have a central log server implemented at home as well, and I did some statistics today.

As I'm not a regular user my systems are secured pretty well, but the logs tell their own story. These statistics are based purely on SSH attacks and no other services:
2004 - 29 attempts total
2005 - 71 attempts total
2006 - 194 attempts total
2007 (up to October 24) - 3794 attempts total
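For the record, numbers like these are easy to pull out of the sorted logs with standard tools. A sketch, assuming sshd's usual "Failed password ... from <IP> ..." message format and the directory layout from above:

# total attempts in a year
grep -h "Failed password" /var/log/hosts/*/2007/*/*/auth*.log | wc -l

# attempts per source IP, busiest first
grep -h "Failed password" /var/log/hosts/*/2007/*/*/auth*.log \
  | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn | head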

What the statistics show are distributed attacks from different hosts, and I don't even want to estimate what will happen next year if the curve keeps its shape. Central logging is no magic cure against hacking, but it gives you powerful tools to help diagnose a lot of problems.

Blogged with Flock

Sunday, October 14, 2007

Problems with bluez and bluetooth headset...

For some time I have been tinkering with my Jabra BT500v and the bluez bluetooth stack. Under SuSE I had only one way of getting it to work - bluetooth-alsa with all the extra packages (sbc, plugz, btsco, the snd-bt-sco kernel module, and the HCI SCO patch for the kernel). It was all simple (err - possible) and kinda worked (patching the kernel was a pain though). That bluez was really old (2.19 if I remember correctly), so there were no real choices.

When I moved to Fedora 7 I was all set to rebuild everything necessary to get it running again. But since Fedora 7 was supposed to ship a newer BT stack, I hoped I would not have to - starting from bluez 3.16 you don't need all those extras any more. To my surprise I discovered that Fedora 7 only had bluez 3.9. Not a problem - rebuild from the latest sources; 3.20 was the newest, so presumably the best, making it the prime candidate.
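The rebuild itself is nothing special (the src.rpm file names below are just examples of what upstream might be called):

# rebuild and install the new stack
rpmbuild --rebuild bluez-libs-3.20-1.src.rpm
rpmbuild --rebuild bluez-utils-3.20-1.src.rpm
rpm -Uvh /usr/src/redhat/RPMS/i386/bluez-*.rpm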

At the beginning I thought everything was fine (well, after tinkering with the spec file a little). I got Amarok to play, I got VLC to play, I got MPlayer to play, but what I did not get to play was Skype. The thing I was trying to use the headset with. The thing the headset was meant for.

Another blow to my ego came today when I updated my system and got a kdebluetooth update (1.0 beta 8). After a reboot it could not even start - just a plain SIGSEGV at login. Right. The first thing that comes to mind is that kdebluetooth is linked against an older version of the API that has been redesigned or removed in the new version. The interesting part, though, is that the older kdebluetooth worked fine with the new bluez, so I'm a bit confused at the moment. Anyway, I went back to bluez 3.9 until I figure out the reason for the crashes.

At the moment I have 43 self-built/patched rpm's running on my system and no way to track them except the gray matter called the brain. Every rpm I customize/upgrade for my needs grows the need to track the mainstream for bug fixes and security fixes. With this hack I tried to ease my life by cutting the packages I track from 6 to 2. But it seems I'll have to do it the old way, or wait for Fedora 8 to come out to make my life a little easier. Then again, if I can't get Skype running with the new bluez, I'll have to go back anyway. So I'm back at the point where I started.

Anyway, here are some links that can help (or make life more complicated): here, here, here, here, here, here, here, here, here and here.

If I figure out how to get it running I'll add another post.

Blogged with Flock

Thursday, October 11, 2007

Resurrecting old hardware...

Hardware is getting faster and faster, so people upgrade as their needs get bigger and bigger. But what about the old hardware that stays behind? Some of it gets reused for low priority servers, but most of it gets thrown away as obsolete. The worst case is when the hardware itself was already maxed out when it was current.

Well, I happened to be involved in one of these resurrections. The server was an IBM xSeries 220 tower with a 72GB hardware RAID 1 array and a small 40GB IDE drive. The idea was to upgrade the small IDE drive to a 160GB one to get more storage. As the 220 had shown insane stability the idea looked good - right up until the new drive was installed and, well, Linux recognized it. That is where the problems started. The performance was abysmal - the drive maxed out at 500KB/s in the read timing test (hdparm -t /dev/hda). The other symptom was drive seek errors, which would normally indicate bad blocks at the beginning of the disk or in the superblocks - in other words, a dying drive.

As the Maxtor drive was brand new (straight out of the anti-static bag) that could not be the case. A test with a second 160GB drive gave the same results. Thinking logically, this means we have hit the 28-bit LBA limit of about 137GB (128GiB). The BIOS can't address anything beyond that, and when you do, you hit the brick wall of overflowing address registers (most likely when you write past the limit you wrap around to 0x0 and start overwriting the beginning of the drive). Of course the Linux kernel takes over the drive addressing after boot, but for that to work the BIOS has to be satisfied first. Copying 128GiB at the rate of 500KB/s would take around 268,000 seconds - that's around 3 days straight - and I didn't have that much time to wait and see what happens when the limit gets exceeded.

So what do you do when your hardware is too obsolete to upgrade to your needs? You hack it!

In my case it was simple, as Seagate (which had bought Maxtor) has a firmware utility called SeaTools that can change the drive's reported capacity in the firmware. Why is that good, you ask? This way we can fool the hardware without reaching for a soldering iron or an assembler to hack the device's firmware.

The procedure goes like this:
  1. Install the new HDD
  2. Boot into the BIOS and accept the changes (new HDD, 6GB?!?!?)
  3. Boot into the SeaTools utility (CD or floppy)
  4. Change the drive's capacity to 32GB
  5. Shut down the machine and remove the drive
  6. Boot into the BIOS and accept the changes (the IDE drive has gone away)
  7. Shut down the machine and reconnect the drive
  8. Boot into the BIOS and accept the changes (new 32GB drive)
  9. Boot into SeaTools and restore the drive's full capacity
  10. Boot into Linux :)
The BIOS still thinks it has a 6GB drive connected, but it's happy with it. Linux also boots up faster, as the BIOS is no longer complaining about seek errors when addressing the drive.
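After the trick it is worth checking that the kernel really sees the whole drive (the output line below is illustrative):

# the kernel should report the full capacity no matter what the BIOS thinks
fdisk -l /dev/hda | head -2
# Disk /dev/hda: 160.0 GB, 160041885696 bytes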

Running hdparm -t should now result in 13...25MB/sec - not the best speed, but a lot better than 500KB/sec. You may ask why the range is so big - I don't have an answer. On some systems the drive runs in udma2 mode, on others in mdma2 mode. Forcing mdma2 mode over to udma2 did not seem to bring the desired performance change, but it still works well enough.
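To see which mode the drive actually runs in, and to try forcing it (use -X with care - a wrong mode can hang the bus):

# the active mode is marked with a star
hdparm -i /dev/hda | grep -i modes

# try forcing udma2, then re-test
hdparm -X udma2 /dev/hda
hdparm -t /dev/hda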

Many may ask "why would you ever want to resurrect an old xSeries 220", but the answer is simple. The system has been in use for a long time, and if it has not died so far it won't in the near future. Also, the built-in SCSI drives give it a speed boost.

So where to use the old xSeries 220? Well, test servers, low priority web/SQL servers, DNS servers - there is no single answer, it all depends on the needs. One good fit would be a central log server: the other servers log to this host, syslog saves the logs in real time to the SCSI drives, and nightly you rotate the logs into a MySQL database on the IDE drive. Why do this? Well, logs are there for one purpose - to be analyzed afterwards.
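The nightly import can be as dumb as a cron job that loads yesterday's dump into MySQL. Everything below - script name, database, table layout - is hypothetical:

# /etc/cron.daily/logs2sql - hypothetical import script
#!/bin/sh
DUMP="/var/log/dumps/$(date -d yesterday +%Y-%m-%d)-all.log"
# one column "line" per log line; tabs don't normally appear in syslog output
mysql syslog -e "LOAD DATA LOCAL INFILE '$DUMP' INTO TABLE logs
  FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n' (line);"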

Another good example would be a file server - share out the data on SCSI to the Linux hosts and from IDE to the windows hosts, and if they start to complain that the speeds are soooo slooowwww, blame it on windows and propagate the move to Linux :P
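A minimal sketch of that split (paths and network are examples):

# /etc/exports - the fast SCSI array goes to the Linux hosts over NFS
/data/scsi 192.168.1.0/24(rw,sync)

# /etc/samba/smb.conf - the slow IDE drive goes to the windows hosts
[slowshare]
  path = /data/ide
  read only = no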

What can I say - use your imagination.
Tested on CentOS 4.5 - but it should work on any Linux with a 2.6 kernel.

Blogged with Flock

Tuesday, October 9, 2007

Linux Bioapi on Fedora 7...

I have been using bioapi and pam_bioapi on my T43p ThinkPad under SuSE 10 Pro for the past 2 years and have been happy with it. No annoying prompts to type your passwords - and as for typing your password at the screen saver prompt... I won't go into that...

As always, all good things have to end sometime, and so went my bioapi when I replaced SuSE with Fedora 7. The switch to Fedora was forced upon me by the famously evil Microsoft with its MSN protocol version upgrade. Much as I had hacked Kopete, the old libs in SuSE were just not up to the task of a plain rebuild to a newer version.

After backing up the usuals, about a week ago I reinstalled my laptop. No big deal. Around half an hour of installation, another half an hour to strip out all the GNOME components and yum junk, and another 2 hours of installing and configuring KDE. All was well until I remembered the most important thing - I didn't back up my own rpm's. And that included bioapi and the pam module. After about 20 times entering the password for the screen saver the old finger swiping reflex started to come back. Crap...

Oh well - I had to start from the beginning.

The first thing to tackle was the bioapi libs. Everything was straightforward until I started packaging it. Google was no help - the best thing it could offer was the "--with-Qt-dir=no" configure option, but that was not acceptable for me. I needed Qt for my screen saver, so after a little debugging I found the evil source of the Qt compile errors. It is an rpm macro:

# here is a hack
# in /usr/lib/rpm/rpmrc line 18
# replace optflags: i386 -O2 -g -march=i386 -mcpu=i686
# with optflags: i386 -O2 -g -march=i386

Every time I compiled, the configure test spat out error messages about -mcpu being deprecated, and the test failed.
As I'm low on time for the time being I just used the hack, and bingo - every test passed successfully.
As this is literally a hack I won't be using it for other packages until needed again; basically it's bioapi's test suite problem and currently I don't have time to start fixing it. The i386 rpm is here and the source rpm is here. One day I will move the udev rules into the bioapi package, but not right now.

OK, so everything should be set to run bioapi, right? Well, not really - the one thing I had totally forgotten was the UPEK Inc. TFM/ESS BSP API. This was the most frustrating part, as I had forgotten about it completely. Since it is distributed under a proprietary license and it seems everyone has pulled their binary and source packages, I'll just publish the spec to build the rpm and the config. It seems that UPEK is enforcing their license, so here is the spec and here is the configuration file. A simple rpmbuild will build an installable package. For licensing read the SDK EULA - I'm not responsible for whatever you do after building the package; this information is provided AS IS, without any warranty from the packager.
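Building it boils down to dropping the SDK tarball into SOURCES and running rpmbuild over the spec (the file names here are examples, use whatever the spec expects):

# with the UPEK SDK tarball in /usr/src/redhat/SOURCES/
rpmbuild -bb tfmessbsp.spec
rpm -Uvh /usr/src/redhat/RPMS/i386/tfmessbsp-*.rpm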

The final piece is the pam_bioapi package. It packaged nicely without any hiccups. But then the problems started. It seems the new version of the module can only use an SQLite3 back end and not the old flat bir-file based back end (at least I didn't find an option to enable it). Thus I was still without a working bioapi. Oh well - back to the old reliable 0.2.1 it went. After a little patching it built nicely. But the problems did not end there - I started to get bioapi errors. "BioAPI error #194d" - google did not give any good answer except that it is a permissions error. After some debugging I found out that the error is caused by wrong permissions on the USB fingerprint reader device. By default it is owned by root and writable only by root - naturally I don't run as root for everyday tasks, so I did some magic and got it running. To make the changes permanent I wrote a little udev rule that makes the device owned by the group "fingerprint" and group writable, and it started to work nicely. Here is the rpm and here is the source rpm.
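For reference, the rule I ended up with looks roughly like this (the rule file name is arbitrary, and the vendor/product IDs are what lsusb shows for the STMicro/UPEK reader in my T43p - check yours):

# /etc/udev/rules.d/60-fingerprint.rules
SUBSYSTEM=="usb", ATTRS{idVendor}=="0483", ATTRS{idProduct}=="2016", GROUP="fingerprint", MODE="0660"

# create the group and add yourself to it
groupadd fingerprint
usermod -a -G fingerprint yourlogin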

I also included a mini "howto" on how to get it working fast and easy. I may also be missing some libs and deps from the packages, as my systems are always bloated with libs and development headers, but the examples are here, so have fun everybody.
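As a taste of the "howto", the PAM side boils down to one extra line per service. A sketch for pam_bioapi 0.2.x - replace <BSP-UUID> with the UUID your TFM/ESS BSP registers and the path with wherever your bir files live:

# /etc/pam.d/kscreensaver fragment
auth sufficient pam_bioapi.so {<BSP-UUID>} /etc/bioapi/pam/
auth required pam_unix.so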

Some useful links also: udev rule writing, file permissions in linux, a good howto on enabling the fingerprint reader, the bioapi error I mentioned before, another error symptom, the Novell way of doing things, the TFMSS fingerprint reader api.

That's about it for today - have fun and play nice :)
PS! Great thanks to the people at UPEK for releasing the Linux port of the fingerprint reader api :)

Blogged with Flock