
Sunday, November 25, 2007

How to fix a simple Fedora 7 to Fedora 8 mess...

While doing a dist-upgrade from Fedora 7 to Fedora 8 I accidentally did an "init 3" - this is not recommended, as the dkms installer runs and hangs the system when the rpm database gets locked.

The main thing is not to panic and reboot the machine, as it may not boot up and you'll create an even bigger mess...

The solution to this is simple (a consolidated sketch follows the steps):
first, kill off any instance of rpm and rpmq
then do rm -rf /var/lib/rpm/__db.*
and finally rpm --rebuilddb
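
Put together, the recovery looks roughly like this (a minimal sketch of the steps above - run as root; killall -9 is a last resort):

# kill any stuck rpm/rpmq processes holding the database lock
killall -9 rpm rpmq 2>/dev/null
# remove the stale Berkeley DB lock/cache files
rm -rf /var/lib/rpm/__db.*
# rebuild the rpm database from the installed package headers
rpm --rebuilddb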

You can test whether it worked by doing apt-get update - if it works, it will complain about a lot of installed packages having duplicates, but that's not really a problem (if you crashed at the beginning of the upgrade).

In my case it left duplicates that are obsoleted by the new release, so I had to do "apt-get -f install" to fix the immediate dependency problems before I could continue with the dist-upgrade.

Some could be frightened by this message:
You are about to do something potentially harmful
To continue type in the phrase 'Yes, do as I say!'

Well, it is the reality - it is potentially dangerous to remove them, so it is wise to check whether the packages that will be removed have duplicates installed already (hint - removing glibc* will trash your system if you don't have a newer version installed).

If you think everything is fine now, well, guess again - the first thing you need to do is install the new apt with its dependencies, do an "apt-get update" and then "apt-get upgrade". A dist-upgrade is not possible, as there are most likely broken dependencies and file conflicts on the system right now, so it will most likely fail at the end.

In my case it was around 800 packages, so I had around 800 duplicates - ARRGH. Removing them manually is possibly the wisest thing, as some packages do not contain fc6/fc7/fc8 tags, so you don't have a reference point for which package is newer. Also, some Fedora 8 packages come without the fc8 tag, so go figure...

The dist upgrade part will be next after the system gets "fixed".

Here are a few scripts that "can" help:
rpm -qa | grep fc7 > /tmp/fc7packages
rpm -qa | grep fc6 >> /tmp/fc7packages

for i in `cat /tmp/fc7packages | sort | sed -e "s/-[0-9].*//" `; do echo "$i check-------------------------------------------------------"; rpm -q $i; done > /tmp/checklist
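
A related helper that can save some eyeballing - list only the package names that are installed more than once (a hedged sketch; the /tmp file names are just examples):

rpm -qa --qf '%{NAME}\n' | sort | uniq -d > /tmp/duplicates
# print every installed version of each duplicated name
for p in $(cat /tmp/duplicates); do echo "== $p"; rpm -q $p; done > /tmp/dupreport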


This is not a fix for a beginner, but a guide on how I did it, so use it at your own risk!

Blogged with Flock

Thursday, October 25, 2007

Central logserver - vol 1 - the basics...


In a normal home environment no one uses a log server, as it does not have any practical use - or does it?

In an enterprise environment it is natural that the administrator(s) look through the servers once in a while (or at least give them a quick checkup when something crashes). In big enterprises it is natural that there is a central log host that collects all the logs into one place, and a team of security professionals that audit those logs. The good side of having a log server is that you can correlate events over an extended period - for example, say you have a smart hacker who has time to crack a root password. He will never run an open brute force attack on your system. He will probe your host maybe once a day from one IP address, but he will most likely have other hosts as well, so basically there can be hundreds if not thousands of attempts per day from different hosts at different times. In a single day you may get one or more attempts from a single IP. The events will be at different times, so it would look natural, as if some user just "accidentally" entered a wrong IP or host name. If this happens frequently enough, the logs get big enough for rotation and will most likely rotate daily. So within a one-day time frame nothing suspicious has happened, and it is not likely that the administrators will look into the rotated logs (if they even can - by default only 4 rotations are kept). This provides an attacker the perfect cover, and all he has to do is wait and hope that one of the passwords works.

Another type of attack that has become more common lately is smurfed brute forcing. In this scenario a company has several servers visible to the outside world, and each server or group of servers is administered by a different administrator. In this case the attacker just tries a few times a day from a single IP against each server, and if the communication between administrators is not working perfectly and none of the administrators is paranoid enough to start poking in the logs, it will also go unnoticed. Once one server's administrator account is compromised, installing a root kit and getting into the other servers is a piece of cake.

Now, if the enterprise has set up a central log server (or a server cluster), all logs are in one place for auditing. Extra security is also gained, as the log server can be firewalled off except for the logging ports, so hacking the log server is harder - and if a host stops logging, that is a perfect reason to audit it. There are several ways to achieve perfection (a small config sketch follows the list):
log all messages into daily logs - the downside is that everything is scrambled and finding a pattern in the mess is hard
separate logs by host - better but still not perfect, as the logs are still a bit of a mess and in the long term grow big and slow
separate logs by host and facility - the logs will have good quality, but again they will grow too big and at some point have to be dumped
separate logs by host, year, month, day and facility - perfect for small to medium enterprises, as you have logs that go back a long time and are properly sorted and separated; new logs are created when the day/month/year changes, so it gives a good overview of a host's status at a given date, and searching for a pattern can be extended over several hosts
separate logs by host, year, month, day, facility and provide extra daily dumps of data from all hosts and all facilities - this method gives an extra feature not available in the previous methods: the dumps can be imported into an SQL database to speed up searching and pattern matching, which also allows running previously created patterns against the logs. The drawbacks are that hard drive space is wasted on duplicates (yes, keeping the duplicates is useful, as one file is used for the SQL dumps and the other for real time auditing) and the analysis can only be run on yesterday's dumps. The storage issue can be overcome if the dumps are imported into a temporary database and afterwards moved into the proper database, which is also backed up.
separate logs by host, year, month, day, facility, daily full dumps and real time database logging - this method is the mother of them all. It is the most complex and has the most features available. All logs are saved into files, because if the database connection is dropped nothing is lost. All logs are also pushed into a database where almost real time analysis can be performed with complex patterns. The daily logs can be run through a real time event correlation application that searches for predefined patterns in the logs and generates events based on the rules.
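
To make the host/year/month/day/facility separation concrete, here is a minimal sketch assuming syslog-ng is used as the collector (the macros are syslog-ng's own; adapt the idea to whatever syslog daemon you actually run):

# /etc/syslog-ng/syslog-ng.conf (fragment)
source s_net {
    udp(ip(0.0.0.0) port(514));    # accept syslog from the clients
};
destination d_hosts {
    # one file per host/year/month/day/facility, directories created on the fly
    file("/var/log/hosts/$HOST/$YEAR/$MONTH/$DAY/$FACILITY.log" create_dirs(yes));
};
log { source(s_net); destination(d_hosts); };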

Which one is right for you depends on the needs of the company, but all of them work well, and all depend on the creativity of the system administrator.

There is one requirement a central log server has - all the clocks have to be in sync down to the second and they have to be in the same timezone, as logs with differing times are useless. One server reports that a hack happened at 01:00, the other reports that the hacked server started attacking it at 06:02. Naturally this raises a question - what happened between 01:00 and 06:02? What was done on the first server that took 5 hours? In reality server 1 got hacked at 01:00 and the hacker took only 2 minutes to root kit it and start hacking server 2, but with such a big time difference it's hard to get a real view of the events that happened.
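
Keeping the clocks in sync is a one-time NTP setup; a hedged sketch for a Red Hat style box (the pool hostname is only an example - point the clients at your own time source):

yum install ntp                               # or apt-get install ntp
echo "server pool.ntp.org" >> /etc/ntp.conf   # better: an internal NTP server
service ntpd start && chkconfig ntpd on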

So, You have read this far and you ask - why am I blabbing about central log servers in enterprise environments when I started out talking about home systems? The answer is simple - I have a central log server implemented at home as well, and I did some statistics today.

As I'm not a regular user I have my systems secured pretty well, but the logs tell their own story. These statistics are based purely on SSH attacks and no other services:
2004 - 29 attempts total
2005 - 71 attempts total
2006 - 194 attempts total
2007 (up to 10-24) - 3794 attempts total

All the statistics show are distributed attacks from different hosts, and I don't even want to estimate what will happen next year if the attack curve stays the same. Central logging is not a magic cure against hacking, but it gives you powerful tools to help diagnose a lot of problems.

Blogged with Flock

Sunday, October 14, 2007

Problems with bluez and bluetooth headset...

For some time I have been tinkering with my Jabra BT500v and the bluez bluetooth stack. Under SuSE I had only one way of getting it to work - bluetooth-alsa with all the extra packages needed to make it go (sbc, plugz, btsco, the snd-bt-sco kernel module, and the HCI SCO patch for the kernel). All was simple/(err - possible) and it kinda worked (patching the kernel was a pain though). The bluez was really old (2.19 if I remember correctly) so there weren't really any choices.

When I moved to Fedora 7 I was all set to rebuild everything necessary to get it running again. As Fedora 7 was supposed to have a new BT stack I would not have to do all of that again, since starting from 3.16 you don't need those extras any more. To my surprise I discovered that Fedora 7 shipped only version 3.9 of bluez. Not a problem - rebuild with the latest sources - 3.20 is the latest, so it should be the best and is the prime candidate.

At the beginning I thought everything was fine (well, after tinkering with the spec file a little). I got amarok to play, I got vlc to play, I got mplayer to play, but what I did not get to play is Skype. The thing I was trying to use the headset with. The thing the headset was meant for.

Another blow to my ego came today when I updated my system and got a kdebluetooth update (1.0 beta 8). After a reboot it could not even start - just a plain SIGSEGV at login. Right. The first thing that comes to mind is that kdebluetooth is linked against an older version of the API that has been redesigned or removed in the new version. Though the interesting part is that the older version worked fine with the new bluez, so I'm a bit confused at the moment. Anyway, I went back to the 3.9 bluez until I figure out the reason for the crashes.

At the moment I have 43 self-built/patched rpm's running on my system and no way to track them except the gray matter called the brain. With every rpm I customize/upgrade for my needs grows the need to track the mainstream for bug fixes and security fixes. With this hack I tried to ease my life by reducing the packages I track from 6 to 2. But it seems I'll have to do it the old way, or wait for Fedora 8 to come out to make my life a little easier. Then again, if I can't get Skype running with the new bluez I'll have to go back again. Anyway, I'm back at the point where I started.

Anyway, here are some links that can help (or make life more complicated): here, here, here, here, here, here, here, here, here and here.

If I figure out how to get it running I'll add another post.

Blogged with Flock

Thursday, October 11, 2007

Resurrecting old hardware...

Hardware is getting faster and faster, so people upgrade as their needs get bigger and bigger. What about the old hardware that stays behind? Some of it gets reused for low priority servers, but most of it gets thrown away as obsolete. The worst case is when the hardware itself has become obsolete, aka it was maxed out when it was current.

Well, I happened to be involved in one of these resurrections. The server was an IBM xSeries 220 tower with a 72GB hardware RAID 1 array and a small 40GB IDE drive. The idea was to upgrade the small IDE drive to a 160GB one to get more storage. As the 220 has shown insane stability the idea looked good at the beginning - well, until the new drive was installed and Linux recognized it. Here is where the problems started. The performance was insanely bad - the drive would max out at 500KB/s in the read timing test (hdparm -t /dev/hda). The other symptom was drive seek errors, which would indicate bad blocks at the beginning or bad superblocks, which would mean that the drive was dying.

As the Maxtor drive was brand new (just out of the anti-static bag) that could not be the case. A test with a second 160GB drive gave the same results. Thinking logically - this means we have hit the LBA32 127GB limit. The BIOS can't address more than 127GB of address space, and when you do, you hit the brick wall of overflowing the address registers (most likely when you write past 127GB you start from 0x0 again and write over the beginning of the drive). Of course the Linux kernel takes over the drive addressing after you boot up, but for that to work the BIOS has to be satisfied first. Copying 127GB at a rate of 500KB/s would take 266,338 seconds - that's around 3 days straight - and I didn't have that much time to wait and see what would happen after the limit gets exceeded.
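
The back-of-the-envelope math, for the curious (treating 127GB as 127x1024x1024 KB):

echo $(( 127 * 1024 * 1024 / 500 ))   # = 266338 seconds, roughly 3 days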

So what to do when your hardware is too obsolete for upgrading to your needs? You hack it!

In my case it was simple, as Maxtor drives can be reconfigured with the SeaTools firmware utility, which can change the drive's capacity in the firmware. Why is that good, you ask? Well, this way we can fake out the hardware without using a soldering iron or assembler to hack the firmware in the device.

The procedure goes like this:
  1. Install the new HDD
  2. Boot into the BIOS and accept the changes (new HDD, 6GB?!?!?)
  3. Boot into the SeaTools utility (CD or floppy)
  4. Change the drive's capacity to 32GB
  5. Shut down the machine and remove the drive
  6. Boot into the BIOS and accept the changes (the IDE drive has gone away)
  7. Shut down the machine and reconnect the drive
  8. Boot into the BIOS and accept the changes (new 32GB drive)
  9. Boot into SeaTools and restore the drive's capacity
  10. Boot into Linux :)
The BIOS still thinks it has a 6GB drive connected, but it's happy with it. Linux also boots up faster, as the BIOS is not complaining about seek errors when addressing the drive.

Running hdparm -t should now give 13...25MB/sec - not the best speed, but a lot better than 500KB/sec. You may ask why the range is so big - I don't have an answer. On some systems the drive runs in udma2 mode, on others in mdma2 mode. Forcing mdma2 mode to udma2 mode did not seem to bring the desired performance change, but it's still working well enough.
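
For reference, the checks and the forcing attempt look roughly like this (a hedged sketch - /dev/hda is the IDE drive from above, and forcing transfer modes can hang a box, so don't try it on anything you care about):

hdparm -t /dev/hda             # read timing benchmark
hdparm -i /dev/hda             # shows which DMA mode is active (marked with *)
hdparm -d1 -X udma2 /dev/hda   # try to force DMA on and select UDMA2 (-X66 on older hdparm)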

Many may ask "why would you ever want to resurrect an old xSeries 220", but the answer is simple. The system has been in use for a long time, and if it has not died so far, it won't in the near future. Also, the built-in SCSI drives give it a speed boost.

So where to use the old xSeries 220? Well, for test servers, for low priority web/sql servers, dns servers - there is no single good answer. It all depends on the needs of the system. One place would be a central log server - aka other servers log to this host, syslog saves the logs in real time to the SCSI drives, and nightly you rotate the logs into a mysql database on the IDE drive. Why do this? Well, logs are there for one purpose - to be analyzed afterwards.

Another good example would be to use it as a file server - share out the data on SCSI to Linux hosts and from IDE to windows hosts, and if they start to complain that the speeds are soooo slooowwww, blame it on windows and propagate the move to Linux :P

What can I say - use Your imagination.
Tested on CentOS 4.5 - but it should work on any Linux with a 2.6 kernel.

Blogged with Flock

Tuesday, October 9, 2007

Linux Bioapi on Fedora 7...

I have been using bioapi and pam_bioapi on my T43p ThinkPad under SuSE 10 Pro for the past 2 years and have been happy with it. No annoying password prompts, and well, typing your password at the screen saver prompt... I won't go into that...

As always, all good things have to end sometime, and so went my bioapi when I replaced SuSE with Fedora 7. The reason to switch to Fedora was forced upon me by the all-famous evil MicroSoft with its all-famous MSN version upgrade. As much as I had hacked Kopete, the old libs in SuSE were just not up to the task of a plain rebuild to a newer version.

After backing up the usual stuff about a week ago I reinstalled my laptop. No big deal. Around half an hour of installation, another half an hour to strip out all of the gnome components and yum junk, and another 2 hours of installing and configuring KDE. All was well until I remembered the most important thing - I didn't back up my own rpm's. And that included the bioapi and the pam module. After about 20 times entering the password for the screen saver, the old finger swiping reflex started to come back. Crap...

Oh well - I had to start from the beginning.

The first thing to do is the bioapi libs. Everything was straightforward until I started packaging it. Google was no help - the best thing google could offer was the "--with-Qt-dir=no" configure option, but that was not acceptable for me. I needed Qt for my screen saver, so after a little debugging I found the evil source of the Qt compile errors. It is an rpm macro

# here is a hack
# in /usr/lib/rpm/rpmrc line 18
# replace optflags: i386 -O2 -g -march=i386 -mcpu=i686
# with optflags: i386 -O2 -g -march=i386
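
If you prefer doing that edit from the shell, a hedged one-liner (back up rpmrc first - the exact optflags line can differ between rpm versions):

cp /usr/lib/rpm/rpmrc /usr/lib/rpm/rpmrc.bak
sed -i 's/^optflags: i386 -O2 -g -march=i386 -mcpu=i686$/optflags: i386 -O2 -g -march=i386/' /usr/lib/rpm/rpmrc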

Every time I compiled, the configure test spat out error messages about mcpu being deprecated, and the test failed.
As I'm low on time at the moment I just used the hack and - bingo - every test passed successfully.
As this is literally a hack I won't be using it for other packages until needed again; basically it's bioapi's testing problem and currently I don't have time to start fixing it. The i386 rpm is here and the source rpm is here. One day I will move the udev rules into the bioapi package, but not right now.

Ok, so everything should be set to run bioapi, right? Well, not really - the one thing I had totally forgotten was the UPEK Inc. TFMSS BSP API. This was the most frustrating part, as I had forgotten about it completely. As it is distributed under a proprietary license and it seems everyone has withdrawn their binary and source packages, I'll just publish the spec to build the rpm and the config. It seems that UPEK is enforcing their license, so here is the spec and here is the configuration file. A simple rpmbuild will build a package that can be installed. For licensing read the SDK EULA - I'm not responsible for whatever you do after building the package; this information is provided AS IS, without any warranty from the packager.

The final thing is the pam_bioapi package. This packaged nicely without any hiccups. But then the problems started. It seems that the new version of the module is only capable of using a SQLite3 back end and not the old flat .bir file based back end (at least I didn't find an option to enable it). Thus I was still without a working bioapi. Oh well - back to the old reliable 0.2.1 it went. After a little patching it built nicely. But the problems did not end there - I started to get bioapi errors. "BioAPI error #194d" - google did not give any good answer for it except that it is a permissions error. After some debugging I found out that the error is caused by wrong permissions on the USB fingerprint reader device. By default it is owned by root and writable only by root - naturally I'm not root for everyday tasks, so I did some magic and got it running. To make the changes permanent I wrote a little udev rule that makes the device owned by the group "fingerprint" and writable by that group, and it started to work nicely. Here is the rpm and here is the source rpm.
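
For reference, the rule looks roughly like this (a hedged sketch - the vendor/product IDs are assumed for the UPEK/STMicro TouchStrip readers, so verify yours with lsusb; older udev versions may want SYSFS{} and SUBSYSTEM=="usb_device" instead):

# /etc/udev/rules.d/60-fingerprint.rules (hypothetical file name)
SUBSYSTEM=="usb", ATTRS{idVendor}=="0483", ATTRS{idProduct}=="2016", GROUP="fingerprint", MODE="0660"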

I also included a mini "howto" on getting it working quickly and easily. I may also be missing some libs and deps from the packages, as my systems are always bloated with libs and development headers, but the examples are there, so have fun everybody.

Some useful links also: udev rule writing, file permissions in linux, a good howto on enabling the fingerprint reader, the bioapi error I mentioned before, another error symptom, the Novell way of doing things, the TFMSS fingerprint reader api.

That's about it for today - have fun and play nice :)
PS! Great thanks to the people at UPEK for releasing the Linux port of the fingerprint reader api :)

Blogged with Flock

Wednesday, August 8, 2007

Uplink - The way of the hacker...

Ok, so this is a bit off topic, but I just could not leave it unmentioned.

Everyone says Linux is crap just because there are no games that run natively under Linux - and I'm not talking about the GPL'd games and the community "hacked" games. All the big game companies whine about the lack of standard game development APIs and libraries, but they do exist; the companies just don't have the coders to port the games to Linux, as the games are built only on top of the windows DirectX APIs and are not written with other rendering APIs in mind. So it's simpler to blame Linux for lacking APIs than to admit that it would take around 3 years to completely rewrite an existing game for Linux.

Well, I'm glad that one small firm has written a game that runs natively under Linux. The game is called Uplink and the company is Introversion.

This game has existed for a while (ummm, 4+ years), and as I have not played games for a while I only now found out that it has a native Linux build as well :)

Ok, the game is not really big and fancy graphics-wise, but starting from version 1.3 it looks like its windows counterpart. As I'm fond of my 1600x1200 resolution, the 1.3 version is the milestone that proves it is doable. Maybe it is not as simple to make a 3D FPS as a 2D strategy/RPG, but it shows that the potential is there and that the big companies should stop hiding behind the "lack of APIs" and start thinking about the market share Linux games will be taking in the near future. Especially now that the Completely Fair Scheduler has been merged into the Linux kernel starting from 2.6.23.

As a side note - an old goody also exists: Transport Tycoon Deluxe runs perfectly through OpenTTD :)

PS!: I love my vacation so far :)


Blogged with Flock

Thursday, July 19, 2007

Adobe Reader and Fedora 7...

As always, nothing is without problems - and this time is no exception.

As everybody has heard, the Linux AdobeReader_enu-7.0.8-1 has some security holes in it and 7.0.9-1 is available. So today I thought it would be a good idea to upgrade my old version to the new one.

Lets just say it did not go as expected....

It's good that Adobe makes an effort to provide the reader in rpm format, but this time I ran into a small problem. It seems Fedora's libgtk-x11 has gotten a bit too new. The "rpm -Uvh" went fine, but when I tried to launch the reader from the start menu nothing happened. After a few tries the reader still didn't start - no errors or warnings were displayed - so the easiest thing to do was to open my favorite tool, the console :)

When starting the "acroread" script from the command line it starts spitting out "expr: syntax error" errors. This usually means that the evaluated expression is faulty ore some values have not been assigned. Right it just couldn't go without the pain of debugging...

Oh well, firing up vi and opening "/usr/local/Adobe/Acrobat7.0/bin/acroread" showed too much info to start with. The first step was to debug the script to see where it crashed. Just replacing the first line "#!/bin/sh" with a more talkative shell with debugging output, "#!/bin/bash -x", showed me the problem location.

It looks like the "base_version" variable will get an empty value witch on the other hand makes the loop to go wild. So to fix it we have to look at the "get_gtk_file_ver()" function.

And here it is - on line 418 - "echo $mfile| sed 's/libgtk-x11-\([0-9]*\).0.so.0.\([0-9]\)00.\([0-9]*\)\|\(.*\)/\1\2\3/g'". This regex expects to find a version <= 900, but in Fedora it is 1000, so we need to modify it to look like this: "echo $mfile| sed 's/libgtk-x11-\([0-9]*\).0.so.0.\([0-9]\)000.\([0-9]*\)\|\(.*\)/\1\2\3/g'". This one extra zero makes it run nicely. This method is fine for one machine at a time, but when going full scale the regex should be modified to look for 2 OR 3 zeros, so let's hope that Adobe fixes it soon.
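
If you want the "2 or 3 zeros" variant right away, a hedged sketch of what line 418 could look like instead (the interval syntax needs GNU sed; the rest of the function stays as it is):

echo $mfile| sed 's/libgtk-x11-\([0-9]*\).0.so.0.\([0-9]\)0\{2,3\}.\([0-9]*\)\|\(.*\)/\1\2\3/g'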

This "bug" also hunts the CentOS 5 and RHEL 5 and maybe some other newer distros.

So hope for the best and a quick fix from Adobe :)


Blogged with Flock

Saturday, July 7, 2007

LVM volume management made easy...

As an enterprise architect/infrastructure developer I'm faced with a lot of problems that seem trivial on a small scale but become critical when you need to manage thousands of workstations or servers. One of them is hard drive partitioning.

For security and stability reasons it is wise to split your root file system into several pieces - my default is:

/boot
/
/tmp
/var
/var/log
/home
swap

This may seem like a crazy layout, but when you think about it, most problems start with servers getting rootkitted through some trivial exploit that is used to trigger a buffer overflow and thereby get a shell. If tmp, var, var/log and home are mounted noexec, then even when the exploit manages to download its payload script, it won't be allowed to execute. That's one step closer to avoiding a security breach.
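
As an illustration, the relevant /etc/fstab lines could look roughly like this (a hedged sketch - the volume group and LV names are made up, and noexec on /var can upset some rpm scriptlets, so test before rolling it out):

/dev/vg00/tmp      /tmp       ext3  noexec,nosuid,nodev  1 2
/dev/vg00/var      /var       ext3  noexec,nodev         1 2
/dev/vg00/varlog   /var/log   ext3  noexec,nodev         1 2
/dev/vg00/home     /home      ext3  noexec,nosuid,nodev  1 2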

As I said before - it looks like a trivial task - install the server properly and that's it. Well, this advice is good for up to 25..50 machines per admin, but it fails miserably when you are running hundreds or thousands of machines. You may succeed in installing them, but soon after you are faced with the problem that some partitions need to be resized, because one is almost empty and another is constantly filling up.

If it's just a PC you can take it offline and reinstall/boot to rescue and resize them. But when you are running mission critical servers that's not an option. The same goes when you are running systems with different hardware configurations and therefore with different disk sizes. Here is where the Logical Volume Manager (LVM) comes in. It creates another layer of abstraction on top of the physical layer, giving you another layer of dynamic configuration.

The greatest advantage of using LVM is that you can create/resize/delete volumes on the fly without needing to reboot the server or boot to rescue. Sure, it is not as simple as it sounds, but basically it works wonders. It also gives you the possibility to build the LVM volume on top of the whole hard drive or RAID device, leaving you (in the ideal case) with only 1 partition on the device (I use 2 - I like to keep /boot separate).

So in my case it is not wise to do a "full" partitioning of the drive, but to keep the partition sizes to a minimum (that means with 10% overhead), leaving the rest of the drive free. This allows me to simply "add" space to the volume that needs it without having to shrink another volume (a volume can't be mounted while shrinking, so that would need rescue mode!!!). Also, keeping the operating system on one volume group and data on another can save a lot of work when restoring a system. For this I use a piece of software I designed called FabricManager - looks like I'll have to blog about it in the future...

This works well until you run into a problem - you have 1000 workstations installed with this configuration and you want to do a dist upgrade. All is good until you run out of space in the root volume. 80% of the disk is filled with downloaded rpm's and there is no space left to install them. With LVM it is no problem, you just add some space to the root volume and all works out well, but here comes the catch - how long will it take to manually resize the root volume on 1000 workstations? I'm guessing around a month or two, and that is the best case scenario. Assuming that a good security policy is in place and none of the workstations have the same root password, the time could stretch to a few years, which defeats the whole point of upgrading. Sure, if all the workstations have been installed with the same root password you can use a script to run the command automatically, but then you will run into the roughly 10% of computers that won't accept the password and the 10..20% of PCs that are turned off at the time you run it. What about a strict security policy that won't allow root to even log in remotely? What if the workstations are in different locations or countries or even continents? You can't just run the script a few times and hope for the best, as some workstations would get several times the space needed and some would get none.

And so a trivial problem grew out of proportion in seconds, and one small design flaw can cost A LOT of time spent fixing all the problems separately. This led me to write a small shell script that can create/extend logical volumes as needed, and it can be found here. It is not 100% of what I need, but the core functionality is there and it works.
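
The core of such a script boils down to a couple of LVM commands; a hedged sketch (the volume group, LV names and sizes are assumptions, and ext2online is the RHEL4/CentOS4-era online grow tool - newer distros use resize2fs):

# grow an existing volume and its ext3 filesystem online
lvextend -L +2G /dev/vg00/root
ext2online /dev/vg00/root

# or create a brand new volume for an application that wants its own partition
lvcreate -L 4G -n appdata vg00
mkfs.ext3 /dev/vg00/appdata
mkdir -p /srv/appdata && mount /dev/vg00/appdata /srv/appdata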

This also solves problems with a few "special" applications that need their own partition to run, and as I strive for simplicity I can now package the script into the autoupdate package (right... haven't mentioned that one before either... need to fix that in the near future...) and set a dependency on it. So to automate the install of the application I just add one row to the %pre section of the package and voila - a new volume is created at install time :D

It also helps with resizing partitions, by simply updating the base package of my configuration that contains the sizes of the required volumes. This gives me the ability to reconfigure workstations to a new configuration in a few days, and life can go on...

In conclusion - security is important, so no cutbacks should be made there; noexec and nodev are your friends - use them wisely; LVM is a great tool but it is not a magic bullet - adding an extra abstraction layer multiplies the complexity of the system, so use it carefully; and when developing large scale infrastructure, multiply the count of computers by 1000 and ask whether the solution is still easy to manage.

This script was built for CentOS 4.4 and tested on it - it should work on other distros but may need some modifications.


Blogged with Flock

Friday, May 25, 2007

Centos 4.4 and 4.5 apt problems...

Ok, so the first question would be "Why the f*** do you use apt on CentOS?!?!". Well, the answer is simple - yum is too intelligent and too much of a blabber, so it's not useful for automation. That's why apt is preferred - it's dumb as a doorknob, it works fast and it does not try to do things fully automatically (like up2date).

up2date is also an alternative that could be used, but when you one day discover that the uber expensive and uber busy system has crashed and it's the fault of up2date upgrading the system automatically, well... no thanks...

There is a file conflict between centos-release and apt - one of them has gotten too smart. In 4.3 centos-release provided just the basic necessities, but it now also includes both the yum and apt repository lists. And as apt has not been upgraded for a while, there is a file conflict at hand.



The bug has been reported and assigned in the centos bug tracker, but it looks like the fix won't come out anytime soon.

So here is my quick fix - rebuild the apt rpm. In the apt spec file, line 122 is:

%{__install} -p -m0644 centos.list %{buildroot}%{_sysconfdir}/apt/sources.list.d/

remove or comment out this line and run:

rpmbuild -ba apt.spec

To make the build succeed I also had to do this "hack" on line 140:

%doc %{_mandir}/man?/* -> %doc %{_mandir}/man*

This will produce a new rpm that can be manually upgraded with rpm, and the upgrade to CentOS 4.5 can continue. It also means that when the conflict is finally resolved, either the new apt will get upgraded or the old one can be reinstalled - depends on who gives up the file...
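
Put together, the whole detour looks roughly like this (a hedged sketch - the SRPM name and the /usr/src/redhat paths are assumptions for a stock rpmbuild setup):

rpm -ivh apt-*.src.rpm                      # unpack the sources and the spec
cd /usr/src/redhat/SPECS
# ...edit apt.spec as described above (drop line 122, widen the man glob)...
rpmbuild -ba apt.spec
rpm -Uvh /usr/src/redhat/RPMS/i386/apt-*.rpm
apt-get update && apt-get dist-upgrade      # continue the 4.4 -> 4.5 upgrade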

Also, a recommendation for the apt maintainer would be to include build requirements for "bzip2-devel" and "libselinux-devel".

Update:

As I was asked for my version of the apt.spec I uploaded it here.


Blogged with Flock

Thursday, May 3, 2007

Horror of menu editing...

Have You ever had the need to edit your application menu or create new links in it? Click... Click... Drag... Drop... Save! That's how you would do it in KDE kiosk mode. As a result you get a custom menu file that represents the applications menu. Now, in Gnome it isn't that easy - you need to create the .desktop file manually with your favorite text editor (as gnome has neither a kiosk mode nor a menu editor). Ok, so far so good - you have just created a new menu application link with a nice custom icon and the right parameters to run the app.

How about when you want to create a sub menu or two? So you have already created the .desktop files with the right categories - let's assume we need to create a sub menu "SubA" under applications and under that a sub menu "SubB". So the .desktop links have categories "Applications;SubA;SubB". Simple, eh? Well, not really. You also need to create the .directory entries. What are they useful for? I don't know - possibly just to add a comment to the menu, or maybe to add an icon to the folder. Nope! They contain the category tag. All that hassle just to get a category tag?!?!?

So You have the .desktop and the .directory files in place, but the menu is still not showing the right subtree, and the icons end up under the "Other" menu. So what's missing? The menu file! Who would have guessed that you need to create yet another file JUST to get the sub menus! Ok, you create the xml menu file structure consisting of applications->SubA->SubB menu directories. So now you have 4 files: 1 for the application, 2 for the directories and 1 for the menu. With anticipation you expect to see the new sub menu structure, and with even bigger bitterness you will discover that the menu still does not show up!

The missing link is the inclusion of the menu into APPLICATIONS.MENU?!?!?!?!? This is just crazy - why can't I just create a menu file and have it included into the menu structure automatically??? Well, it seems there is a "start-here.menu" which includes the rest of the menus. So to finally get the menu structure you want, you edit the applications.menu file and add this line to the end: "<MergeFile>my-custom.menu</MergeFile>". With the greatest anticipation you log out, kill the X server for insurance (I recommend this, as several times the gconfd daemon did not die after a plain old log out, so -> Ctrl+Alt+Backspace), log back in and open the applications menu. Voila - our SubA and SubB menus are there. But the horror is still not over - we need a script that checks whether our menu is included in the menu on entering runlevel 5 or on boot. The simplest way was not some xml parser but a simple perl script that reads the menu file into an array, joins the array into one long string, checks with a regular expression whether the string contains our menu file, and if not does a little substitution on a known merge-file line and adds our line. Voila - now the process is complete. Btw - in KDE there is a merge directory from where you can automatically merge menu files into the tree...
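
For reference, a minimal sketch of the my-custom.menu from the example above (the names, paths and the SubA/SubB categories are this example's own inventions, and the exact nesting depends on how your distro's applications.menu is laid out):

<!-- /etc/xdg/menus/my-custom.menu -->
<Menu>
  <Name>Applications</Name>
  <Menu>
    <Name>SubA</Name>
    <Directory>suba.directory</Directory>
    <Menu>
      <Name>SubB</Name>
      <Directory>subb.directory</Directory>
      <Include><Category>SubB</Category></Include>
    </Menu>
  </Menu>
</Menu>

plus the one line that finally makes it visible, added inside the top-level <Menu> element of applications.menu:

<MergeFile>my-custom.menu</MergeFile>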

So if You have read this far you might ask why the hell I'm going through all this trouble with the menus and files - well, how else can you put the files into an RPM? Again you might ask why anyone would ever need to do this, and again I would answer - to deploy this menu structure on around 1000 workstations. RedHat and Novell enterprise linuxes are Gnome-centric, and I don't understand why. They promote gnome because gnome is absolutely minimalistic, so the end user can't mess things up with "accidental" configuration. I use KDE on all my computers and I have never run into problems running and packaging KDE in kiosk mode.

So here is some useful info on menu editing and structures: basic, example, .menu file structure, .desktop files, .directory files and menu file structure.

Here are some suggestions on how to improve the "functionality" of menu editing from the enterprise point of view:

1) as gnome seems to be all in for xml - make handlers for xml based desktop and directory links - it isn't as fast as stream parsing a file, but computers are fast enough these days that they can be parsed once and kept in memory.

2) drop the stupid menu hierarchy with all the inclusions and stuff. The start-here.menu can stay, but it should contain only folder links to folders that contain the desktop files (remember - they are xml, so they can contain their own menu structure!!!). This would remove the need to create 5 files just to create a link in a sub menu of a sub menu - I know it's against the Gnome philosophy to keep everything minimalistic, but from the enterprise point of view it would be minimalistic. We could just point the start-here menu to look for the menu structure in another location and voila - perfection!

3) if the 1st suggestion is too much work, the menu file itself could be put together from the xml files - merging xml files is simpler than building one stream parser for desktop files and another for directory files. This way we could join the directory entries and the desktop links into one folder, so managing them would be easier.

Too long... Too boring... Maybe useful...


Blogged with Flock

Wednesday, April 25, 2007

Lightning Multiweek fix in 60 seconds...

As I really can't live without it I decided to hack it. Here is my little hack step by step, as it may be useful to someone until it gets fixed by the author. As I use Linux, the hack is made under SuSE 10.0 Pro.

So as not to create useless stuff in the home directory, I recommend using the /tmp directory:

vahur@too:~> cd /tmp/
vahur@too:/tmp> mkdir lm
vahur@too:/tmp> cd lm
vahur@too:/tmp/lm> wget --no-check-certificate https://addons.mozilla.org/en-US/thunderbird/downloads/file/12248/lightning_multiweek_view-0.0.2-tb.xpi
vahur@too:/tmp/lm> unzip -x lightning_multiweek_view-0.0.2-tb.xpi
vahur@too:/tmp/lm> dos2unix install.rdf
vahur@too:/tmp/lm> sed -i -e "s#>2.0b1<#>2.0.0.*<#" install.rdf
vahur@too:/tmp/lm> zip lm.xpi chrome/ltnmw.jar chrome.manifest install.rdf

Here is the long version: I create a folder called "lm" in the tmp directory, download the original xpi from the mozilla site with wget, extract all the files with unzip, convert install.rdf to unix line endings since the file was created under windows (lowers the risk of stray characters interfering), bump the old maximum version to the new one in place with the stream editor sed, and the last thing is to zip it back up :)

That's it - just install it as a normal extension in Thunderbird. Works 95% - it may mess up the preferences menu a little - use at your own risk.

I wish all my work would be that easy...


Blogged with Flock

Tuesday, April 24, 2007

x86 on a POWER...

It seems IBM has a few tricks up their sleeves. Normally there would not be an efficient way to run Linux on their mainframes, thanks to their fully hot-swap architecture. Fried a CPU or a RAM module? Maybe even a mainboard? No problem - they are hot swappable :)

More info on the architecture here. Normal small to midrange enterprises usually don't need them, as they can deploy a grid architecture, but there is a wide range of services that need to maintain around 99.9+% uptime under high loads. Here the System p comes in to fix most of the problems. Grids are powerful, but they have some shortcomings. They don't provide enough power for high end calculations, as the nodes are usually linked with interfaces of up to 1gbit - the System p, on the other hand, is interlinked with a maximum theoretical bandwidth of 30gbit or more. It also provides hot swap for all components (well, almost all) - I haven't looked into the System p design for a long time, but around 3 years ago the only part that was not hot swappable was the interconnect plate. Usually these systems run without problems (after they have been set up properly) for 6+ years, and when a crash happens it is total, as the people who set the system up have most likely moved up the food chain and got a better job, or just won't remember how to set them up. Here is where the grid is more fault tolerant - you can lose some of the nodes without any impact on overall performance (and I don't mean "2 pc clusters"). Grid hardware is also cheaper (usually normal PCs or racked servers or blades), interconnected with either 1gbit full duplex single mode fiber, 10gbit multi mode dark fiber, or, when the requirements are high enough, a professional interconnect.

Anyone wanna donate an IBM System p5 595 with 64x2.3GHZ CPU, 2TB RAM and a 28.1TB storage node for hmm... testing purposes? :)



Blogged with Flock

Friday, April 20, 2007

CentOS 5.0 is out...

The long-awaited release has happened - CentOS 5.0 is officially out!

The good news is that the base packages are recompiled and out - the extras/addons repos are still empty. The kernel is a fairly new 2.6.18, which means the OS has SAS support - yehaa :) Since most IBM 366s are running low on supply, the new IBM 3550s are in (well, it still needs a bit of testing, but at least it boots up properly) :) This is a huge step forward, but there is still a long way to go. This might give some idea of how long...


Out of curiosity I tried to upgrade one of my 4.4 servers to 5.0, but it did not go well. After removing around 40 packages I finally got yum to upgrade the kernel and the rest of the system. As a result I found out that the upgrade process died in the middle. Rpm had removed its libs and tried to install new ones - it crashed with unresolved deps. The last test was to boot the machine, and that didn't go well either - it crashed on an attempt to load the LSI Fusion MPT driver :( As it was a test, the results are not promising. The lack of apt and apt support for the repos hurts as well (I really really hate yum and up2date). Yum is too talkative and generates several race conditions when doing the upgrade, so it lacks the silence needed for scripting!

The conclusion - it works, looks better than 4.4, is more resource hungry (at least 512MB to do a graphical install), lacks support for apt, has empty extras, contrib and plus repos, and boots faster (I didn't time it with a stop watch, but it looked faster at least). This means it's too crude for enterprise wide adoption, but it shows great promise; the next tests will probably be done in a few months to see if it has improved.

PS! If any CentOS developers see this post, PLEASE PLEASE PLEASE port apt to CentOS 5; yum just does not cut it and alternatives are always good to have.


Blogged with Flock

Friday, April 13, 2007

Security vs tech support...

Each system architect knows that there is a fine line between genius and madness; the same goes for security and technical support. To protect the end users you need to strip the system down to a bare minimum and limit the users' access to system resources and communication. This includes firewalls, a limited package count and reduced functionality. So how do you make the service desk work? Around 70% of problems the technician can solve by simply logging into the PC with ssh. What about the remaining 30%? Well, so as not to compromise security, you can't put up a vnc server on a public net (easily spied/faked passwords, user names going over in plain text etc.. etc... etc...).

In KDE it's simple - Krfb works like a charm, can do basic digest authentication and can be configured to bind to a specific interface (preferably localhost [127.0.0.1]). So to let the technician see what the user sees, the user generates a support request and sends it to the technician, and once the technician is done, the session is finished. This lowers the risk of the technician spying on the end user. Simple, huh? Well, not really - when using gnome it isn't. Gnome is so obsessed with simplicity that it lacks the configuration option, but it does provide the server itself (it's called vino btw). To get it to bind only to localhost you need to hack the code, but it is doable. The alternative would be to install Krfb with all of its requirements (half of the KDE environment), so it is not practical and can cause conflicts in package relations. To use the vnc you need a vncviewer package, and the command to initiate the session is "vncviewer -via <the host name of the PC> localhost" - this pushes the vnc data through the SSH tunnel and encrypts it in transport. Security is elemental :)
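
A concrete example of that last step (the hostname and display number are made up - the technician's ssh account on the workstation does the tunneling):

# on the technician's machine: tunnel the VNC session over ssh to the user's PC
vncviewer -via workstation42.example.com localhost:0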


Have fun experimenting :)



Blogged with Flock

Thursday, April 12, 2007

Cedega 6.0 is out...

Reported by /. - the announcement and a first review of it. Looks like there are some improvements, but still no good ATI support :( . Well, as the review tells you, Vista does not come even close to Linux or windows XP, so I wonder why Microsoft promotes it so much.

Let the gaming begin :)


Blogged with Flock

Wednesday, April 4, 2007

The wise ways of the Gimp...

I'm no big fan of photo editing, but when I need to do something I either use ImageMagick (for automated image transformations) or Gimp. Basically I'm a noob with google when it comes to images - I have the tools but not enough info on how to do what I want.

Well, today I made a discovery about how easy it is to cut a region into a new image. The reason? I needed to cut a smaller image out of a screen shot and to hide part of the text in the image. The problem might seem like an easy one, but when using modern file formats there can be security risks, as they use layers. So when you think you're brushing over some sensitive information in an image, you may just be creating a new layer that can simply be removed without any hassle.

So, to make things short: open the image in gimp, select the region you want with the select tool, right click on the image -> Edit -> Copy (or simply Ctrl+c), right click on the image -> Edit -> Paste as New. Voila, a new image in its own container. Now cover the needed part with a solid color (I personally recommend solid black). Save the file in bmp format - this will export and flatten the file, so we get rid of all the layers that might have been in the original image. This leaves a large file, but you can re-save it in jpg format when needed. Now the data should be unrecoverable from the new file. Simple, isn't it? :)
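
Since I mentioned ImageMagick for automated jobs - the same "crop, black out, flatten" idea can be scripted; a hedged sketch with made-up file names and coordinates:

# crop the wanted region, paint a black box over the sensitive text,
# and write a single-layer jpg so nothing hides in extra layers
convert screenshot.png -crop 400x300+50+80 +repage \
        -fill black -draw "rectangle 20,20 380,60" -flatten redacted.jpg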


Blogged with Flock

Friday, March 16, 2007

Why people think linux is expensive...

In today's slashdot there is a story about Novell and its vision. Well, I did develop a Linux infrastructure based on SLES9 - well, almost did. This was about 1..1.5 years back, when Novell had merged with SuSE Linux. The first 6 months were fine; the autoyast feature was pretty good at profiling servers. The simple XML structure was manageable and debugging was pretty simple. We developed an in-house server registration and profiling app that worked pretty well. Creating and managing an in-house repository was also pretty straightforward, but it lacked the simplicity of apt/yum. When moving a repository you first had to remove the old one and install a new one, instead of just modifying the config files and sending them out. The delta rpm feature was usable, but a bit clumsy, as it needs the original rpm to which the delta is applied (for large packages this can slow down the server a lot - I won't recommend doing it on live systems).

Here the problems started. First, the patch file structure was not well defined, so creating in-house rpm updates was hard and the packaging process was slow. Then came the problem that our update script was unable to mirror the repositories to our in-house server. For about a month or two the Novell experts called and came on site and debugged what was wrong. After a really depressing day of ethical hacking/reverse engineering to find out what was wrong, I discovered that Novell had switched their server to a Zenworks server. Well... of course nothing worked, since the Zenworks server expects special header tags in each request. Uhh ahh.. a day later we had the Zenworks mirror up and running. The problems didn't stop there. The Zenworks repository is completely different from the normal YaST repository. After digging into the depths of the repository documentation, a week later and about 1000 rows of perl we had an (almost) working YaST repository. It worked for around 4 months, then Novell decided to change something inside and I wasn't able to reverse engineer this one, so out went Novell...

Novell of course provided us a solution - use Zenworks - and they were happy to sell us 80 licenses. Why 80 licenses? Well, we have around 80 geographically remote branch offices with slow links, and if the repository admin put, for example, a kernel update into the central repository, there would be a gap of around 2 days before all the workstations had downloaded the package from the central repo, and that is not acceptable. So out went Novell and in came RedHat Enterprise Linux 4. Another year later we have a working RH infrastructure. So the problem is not what is cheaper, but what decision to make and when. Usually managers are not very tech savvy and base their decisions on the numbers without seeing the big picture of what it will cost to develop and maintain.

As a side note you may ask "Why not go with fully open source, like debian and ubuntu?" - well, the answer is not that simple. Firstly, Debian and Ubuntu are very good distros; I personally have nothing against them, and they are totally free with good community support. Now the downside: they lack profiling methods. For them to mature to the enterprise level they first need a profiling method similar to autoyast or kickstart. They also need to change their configuration logic a little, since profiling usually sets some specific config values and applies them to templates to get the real configuration files. This is not a very big change, but still... So far I have seen "automated" installations based on script files run while installing - one mistake will take out the whole install, or create partially configured servers/workstations. The other issue is that most enterprise level Linux developers are used to the RPM package format and debian uses DEB - so there will be a big learning curve to relearn how to package debs.

So, in conclusion: Windows is not cheaper than enterprise Linuxes - everything depends on the needs of the enterprise and the communication between the managers and the people who will develop and run the systems. A developer/head developer and an administrator/head administrator need to be included when discussing the purchase of a new platform, as they usually ask the questions that will make it a success or a failure. And don't make rash decisions based just on the numbers.



Blogged with Flock

Monday, March 12, 2007

spec changelog made easy...

Are you an rpm maintainer? Do you update your spec files often? Have you gotten tired of remembering what day it is? Getting tired of writing your name and mail into the rpm changelog?

If you answered yes to 2 of the questions, you are in luck :) I maintain several rpm's and I always forget what day it is when I need to update the changelog. After losing several valuable seconds figuring it out, I find I have forgotten some of the things I needed to add to the changelog.

Well here is a cure - as my main spec writing tool is vim(-enhanced :) I added this little snippet to my ~/.vimrc


:command -nargs=0 V :r!date +"* \%a \%b \%d \%Y My Name <my.name@my.domain>\%n- "

No more annoying date/name errors - when you need a new changelog entry just type :V and you're set :)



Blogged with Flock