Wednesday, April 25, 2007

Lightning Multiweek fix in 60 seconds...

As I really can't live without it, I decided to hack it. Here is my little hack, step by step, as it may be useful to someone until it gets fixed by the author. Since I use Linux, the hack was done under SuSE 10.0 Pro.

To avoid creating useless stuff in the home directory, I recommend using the /tmp directory:

vahur@too:~> cd /tmp/
vahur@too:/tmp> mkdir lm
vahur@too:/tmp> cd lm
vahur@too:/tmp/lm> wget --no-check-certificate https://addons.mozilla.org/en-US/thunderbird/downloads/file/12248/lightning_multiweek_view-0.0.2-tb.xpi
vahur@too:/tmp/lm> unzip -x lightning_multiweek_view-0.0.2-tb.xpi
vahur@too:/tmp/lm> dos2unix install.rdf
vahur@too:/tmp/lm> sed -i -e "s#>2.0b1<#>2.0.0.*<#" install.rdf
vahur@too:/tmp/lm> zip lm.xpi chrome/ltnmw.jar chrome.manifest install.rdf

Here is the long version: I create a folder called "lm" in the tmp directory, download the original xpi from the Mozilla site with wget and extract all the files with unzip. As the file was created under Windows, it needs to be converted to Unix line endings before editing (this lowers the risk of stray characters interfering). Then the old maxVersion is replaced with the new one using the stream editor sed (editing the file in place), and the last step is to zip it back up :)
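If you expect to redo this after the next Lightning update, the same steps can be wrapped in a small script. This is just a rough sketch of the commands above - the download URL, the version string and the output name are taken from this post and may well change:

#!/bin/sh
# repack the Lightning Multiweek View xpi with a relaxed maxVersion
# sketch only - URL, version string and file names are assumptions from this post
set -e
mkdir -p /tmp/lm && cd /tmp/lm
wget --no-check-certificate https://addons.mozilla.org/en-US/thunderbird/downloads/file/12248/lightning_multiweek_view-0.0.2-tb.xpi
unzip -o lightning_multiweek_view-0.0.2-tb.xpi
dos2unix install.rdf
sed -i -e "s#>2.0b1<#>2.0.0.*<#" install.rdf
zip lm.xpi chrome/ltnmw.jar chrome.manifest install.rdf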

That's it - just install it as a normal extension in Thunderbird. It works about 95% - it may mess up the preferences menu a little - use at your own risk.

I wish all my work would be that easy...


Blogged with Flock

Tuesday, April 24, 2007

x86 on a POWER...

It seems IBM has a few tricks up its sleeve. Normally there would not be an efficient way to run x86 Linux on their mainframes, which is a pity given their fully hot-swap architecture. Fried a CPU or some RAM? Maybe even a mainboard? No problem - they are hot-swappable :)

More info on the architecture here. Small to midrange enterprises usually don't need them, as they can deploy a grid architecture instead, but there is a wide range of services that need to maintain 99.9+% uptime under high load. This is where the System p comes in to fix most of the problems. Grids are powerful, but they have some shortcomings: they don't provide enough power for high-end calculations, as the nodes are usually linked with interfaces of up to 1 Gbit - the System p, on the other hand, is interlinked with a maximum theoretical bandwidth of 30 Gbit or more. It also provides hot swap for all components (well, almost all) - I haven't looked at the System p design for a long time, but around 3 years ago the only part that was not hot-swappable was the interconnect plate.

Usually these systems run without problems for 6+ years (after they have been set up properly), and when a crash does happen it is total, as the people who set the system up have most likely moved up the food chain to a better job, or simply won't remember how to set it up again. Here the grid is more fault tolerant - you can lose some of the nodes without any impact on overall performance (and I don't mean "2 PC clusters"). Grid hardware is also cheaper (usually normal PCs, racked servers or blades), interconnected with either 1 Gbit full duplex single-mode fiber, 10 Gbit multi-mode dark fiber, or, when the requirements are high enough, a professional interconnect.

Anyone wanna donate an IBM System p5 595 with 64 x 2.3 GHz CPUs, 2 TB of RAM and a 28.1 TB storage node for, hmm... testing purposes? :)



Blogged with Flock

Friday, April 20, 2007

CentOS 5.0 is out...

The long-awaited release has happened - CentOS 5.0 is officially out!

The good news is that the base packages have been recompiled and are out - the extras/addons repos are still empty. The kernel is a fairly new 2.6.18, which means the OS has SAS support - yehaa :) Since most IBM 366 machines are running low on supply, the new IBM 3550 are in (well, it still needs a bit of testing, but it at least boots up properly) :) This is a huge step forward, but there is still a long way to go. This might give some idea of just how long...


Out of curiosity I tried to upgrade one of my 4.4 servers to 5.0, but it did not go well. After removing around 40 packages I finally got yum to upgrade the kernel and the rest of the system. As a result, I found out that the upgrade process died in the middle: rpm had removed its own libs and tried to install new ones, and it crashed with unresolved deps. The last test was to boot the machine, and that didn't go well either - it crashed on an attempt to load the LSI Fusion MPT driver :( As this was only a test, the results are not promising. There is also the lack of apt and apt support for the repos (I really, really hate yum and up2date). Yum is too talkative and generates several race conditions when doing the upgrade, so it lacks the silence needed for scripting!
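For scripted updates, the closest I can get to silence out of yum is forcing quiet, non-interactive runs and checking the exit code myself - a rough sketch only, not a recommended in-place upgrade procedure:

# unattended, quiet yum run - a sketch, not a full 4.x to 5.0 upgrade recipe
yum -y -q clean all
yum -y -q update
if [ $? -ne 0 ]; then
    echo "yum update failed" >&2
    exit 1
fi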

The conclusion - it works, looks better than 4.4, is more resource hungry (at least 512 MB to do a graphical install), lacks support for apt and has empty extras, contrib and plus repos, and it boots faster (I didn't time it with a stopwatch, but it looked faster at least). This means it's too crude for enterprise-wide adoption, but it shows great promise; the next tests will probably be done in a few months to see if it has improved.

PS! If any CentOS developers see this post: PLEASE, PLEASE, PLEASE port apt to CentOS 5 - yum just does not cut it, and alternatives are always good to have.


Blogged with Flock

Friday, April 13, 2007

Security vs tech support...

Every system architect knows there is a fine line between genius and madness; the same goes for security and technical support. To protect the end users you need to lock the system down to a bare minimum and limit the users' access to system resources and communication: firewalls, a limited package count, reduced functionality. So how do you make the service desk work? Around 70% of problems the technician can solve by simply logging into the PC with ssh. What about the remaining 30%? To avoid compromising security you can't expose a VNC server to a public net (passwords are easily spied on or faked, usernames go over in plain text, etc.).

In KDE it's simple - Krfb works like a charm, can do basic digest authentication and can be configured to bind to a specific interface (preferably localhost [127.0.0.1]). So, for the technician to see what the user sees, the user generates a support request and sends it to the technician, and once the technician is done, the session is finished. This lowers the risk of the technician spying on the end user. Simple, huh? Well, not really - with GNOME it isn't. GNOME is so devoted to simplicity that it lacks the configuration option, although it does provide the server itself (it's called Vino, btw). To get it to bind only to localhost you need to hack the code, but it is doable. The alternative would be to install Krfb with all of its requirements (half of the KDE environment), so it is not practical and can cause conflicts in package dependencies.

To view the session you need a vncviewer package, and the command to initiate it is "vncviewer -via <the host name of the PC> localhost" - this pushes the VNC data through an SSH tunnel and encrypts it in transit. Security is elementary :)
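For reference, the -via option just wraps an SSH tunnel for you, so the same thing can be done by hand if your vncviewer build lacks it. A rough sketch, assuming the desktop session sits on VNC display :0 (TCP port 5900) and that you have an SSH account on the user's PC - the user and host names here are placeholders:

# manual equivalent of "vncviewer -via <host> localhost" - sketch only
# assumes VNC display :0 (port 5900) and SSH access to the user's PC
ssh -f -L 5900:localhost:5900 support@users-pc sleep 30
vncviewer localhost:0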


Have fun experimenting :)



Blogged with Flock

Thursday, April 12, 2007

Cedega 6.0 is out...

Reported by /.: the announcement and a first review of it. It looks like there are some improvements, but still no good ATI support :( Well, as the review tells you, Vista does not come even close to Linux or Windows XP, so I wonder why Microsoft promotes it so much.

Let the gaming begin :)


Blogged with Flock

Wednesday, April 4, 2007

The wise ways of the Gimp...

I'm no big fan of photo editing, but when I need to do something I either use ImageMagick (for automated image transformations) or Gimp. Basically I'm a noob with Google when it comes to images - I have the tools, but not enough info on how to do what I want.

Well, today I discovered how easy it is to cut a region out into a new image. The reason? I needed to cut a smaller image out of a screenshot and hide part of the text in it. The problem might seem like an easy one, but modern file formats can carry security risks, as they use layers. So when you think you're brushing over some sensitive information in an image, you may just be creating a new layer that can simply be removed without any hassle.

So, to make things short: open the image in Gimp, select the region you want with the select tool, right click on the image -> Edit -> Copy (or simply Ctrl+C), then right click on the image -> Edit -> Paste as New. Voila, a new image in its own container. Now cover the sensitive part with a solid color (I personally recommend solid black). Save the file in BMP format - this exports and flattens the file, so we get rid of all the layers that might have been in the original image. This leaves a large file, but you can re-save it in JPG format when needed. Now the data should be unrecoverable from the new file. Simple, isn't it? :)
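Since I mentioned ImageMagick for automated transformations: the same crop, cover and flatten idea can be scripted with it as well. A rough sketch only - the file names, crop geometry and rectangle coordinates are made-up placeholders you would adjust to your own screenshot:

# crop a region out of a screenshot, black out part of it, and write a flat BMP
# sketch - shot.png, the 400x300+50+80 crop and the rectangle coords are placeholders
convert shot.png -crop 400x300+50+80 +repage \
        -fill black -draw "rectangle 10,10 200,40" \
        -flatten out.bmp
# re-save as jpg later if the bmp is too big
convert out.bmp out.jpg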


Blogged with Flock