...making Linux just a little more fun!
Hello, world!
(Especially that subset that contains the LG readership.)
We have a big Mailbag for you this month - we've finally caught up!
Thanks to a lot of fantastic work on Ben's part, and a lot of troubleshooting (and creative breaking) on mine, the Mailbag also has a hot new look. We'd love to hear your feedback on it.
-- Kat Tanaka Okopnik, Mailbag Editor
Benjamin A. Okopnik [ben at linuxgazette.net]
Thu, 12 Oct 2006 11:15:19 -0400
Hi, all -
I've just put up an "LG projects" page (it's not yet linked anywhere); please take a look at it, let me know what you think. I've found it tremendously useful already for "externalizing" the project ideas I've been carrying around in my head for a long time - now, I don't have to remember all that stuff any more (whee!!!)
[ In service to anthropology, I've added The Missing Link. -- Kat ]
http://linuxgazette.net/jobs.html
I'd appreciate any comments on improvements, changes, etc.
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *
[ Discussion continued (5 messages/3.27kB) ]
Bob van der Poel [bvdp at xplornet.com]
Thu, 07 Sep 2006 19:04:47 -0700
Or maybe the subject should read "I'm cheap, I want a host, but don't want to get ripped off and feel poorly about my choice".
Seriously, I've been looking at a bunch of low-cost hosting services. There are a whole bunch in the sub-$5.00 per month range ... and they all seem to be pretty much equal. Some of the review sites even give them decent ratings.
But, for every one with a decent rating, it's not hard to find a "never, ever use this host" review.
So, any of you folks using a cheap host you're happy with?
Oh, this is just for my personal stuff. I think I need less than 100 MB of storage and a few GB/month of bandwidth. I can register my own domain, or get the free one included in most of the packages (is this a good or bad idea?).
Hopefully someone has some experience to share.
-- Bob van der Poel ** Wynndel, British Columbia, CANADA ** EMAIL: bvdp at xplornet.com WWW: http://users.xplornet.com/~bvdp
Himanshu Thappa [himanshut at KPITCummins.com]
Sat, 16 Sep 2006 02:10:33 +0530
Hi James
Plz tell me what LD_LIBRARY_PATH does? Tell me ASAP.
With Regards
Himanshu Thappa Architecture Engineer KPIT Cummins GBS Ltd. Ext No. 5648 Mob. 09881407689
[ Discussion continued (12 messages/13.45kB) ]
Benjamin A. Okopnik [ben at linuxgazette.net]
Thu, 31 Aug 2006 21:16:29 -0400
[I wrote this earlier, but it didn't go out then. Posting it anyway, despite Peter having solved it, since I figure it'll be useful to our readers.]
On Thu, Aug 31, 2006 at 11:23:23AM -0700, Peter Knaggs wrote:
[ skipping the GMail question ]
> Another question: I've come across this after updating
> Debian testing: I seem to be losing fonts, or at least
> the helvetica font I need for vncviewer and ImageMagick.
>
> For example, "display image.jpg" gives me:
>
>   display: unable to load font
>    `-*-helvetica-medium-r-normal--12-*-*-*-*-*-iso8859-1'.
>   display: unable to load font
>    `-*-helvetica-medium-r-normal--12-*-*-*-*-*-iso8859-1'.
>
> And vncviewer gives me this:
>
>   $ vncviewer wherever:2
>   VNC server supports protocol version 3.7 (viewer 3.3)
>   Password:
>   VNC authentication succeeded
>   Desktop name "wherever:2 (myusername)"
>   Connected to VNC server, using protocol version 3.3
>   VNC server default format:
>     16 bits per pixel.
>     Least significant byte first in each pixel.
>     True colour: max red 31 green 63 blue 31, shift red 11 green 5 blue 0
>   Warning: Cannot convert string "-*-helvetica-bold-r-*-*-16-*-*-*-*-*-*-*" to type FontStruct
>   Warning: Unable to load any usable ISO8859 font
>   Warning: Unable to load any usable ISO8859 font
>   Warning: Missing charsets in String to FontSet conversion
>   Warning: Unable to load any usable fontset
>   Error: Aborting: no font found
>
> I tried searching for explanations, and as far as
> I can tell I've got all the font packages installed.
First, I'd suggest checking to see if your system agrees with you about that.
ben at Fenrir:~$ xlsfonts -fn "-*-helvetica-bold-r-*-*-16-*-*-*-*-*-*-*"
-adobe-helvetica-bold-r-normal--16-116-100-100-p-0-iso10646-1
-adobe-helvetica-bold-r-normal--16-116-100-100-p-0-iso10646-1
-adobe-helvetica-bold-r-normal--16-116-100-100-p-0-iso10646-1
-adobe-helvetica-bold-r-normal--16-116-100-100-p-0-iso10646-1
-adobe-helvetica-bold-r-normal--16-116-100-100-p-0-iso8859-1
-adobe-helvetica-bold-r-normal--16-116-100-100-p-0-iso8859-1
-adobe-helvetica-bold-r-normal--16-116-100-100-p-0-iso8859-1
-adobe-helvetica-bold-r-normal--16-116-100-100-p-0-iso8859-1
-cronyx-helvetica-bold-r-normal--16-116-100-100-p-0-koi8-r
ben at Fenrir:~$ xlsfonts -fn '-*-helvetica-medium-r-normal--12-*-*-*-*-*-iso8859-1'
-adobe-helvetica-medium-r-normal--12-120-75-75-p-67-iso8859-1
-adobe-helvetica-medium-r-normal--12-120-75-75-p-67-iso8859-1
-adobe-helvetica-medium-r-normal--12-87-100-100-p-0-iso8859-1
-adobe-helvetica-medium-r-normal--12-87-100-100-p-0-iso8859-1
-adobe-helvetica-medium-r-normal--12-87-100-100-p-0-iso8859-1
-adobe-helvetica-medium-r-normal--12-87-100-100-p-0-iso8859-1

Mine certainly recognizes those patterns, and can match them from the installed fonts (which is why, I suppose, I don't have that problem.)
If yours can't - and I suspect that this is what you'll find - then you need to do a little investigative work to find out what's happening. First, you'll need to find out what X considers your font directories:
[ ... ]
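[ Ben's step-by-step commands were trimmed at the ellipsis above. As a rough sketch of that sort of investigation - assuming a stock XFree86/X.org setup, and with the font directory path only an example:

  # Ask the running X server where it looks for fonts:
  xset q | grep -A 3 'Font Path'

  # Check whether the problem pattern matches any installed font:
  xlsfonts -fn '-*-helvetica-medium-r-normal--12-*-*-*-*-*-iso8859-1'

  # If the font files exist on disk but X can't see them, rebuild the
  # directory index and tell the server to re-read its font path:
  mkfontdir /usr/share/fonts/X11/75dpi   # example path; varies by distro
  xset fp rehash
]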
[ Discussion continued (7 messages/13.59kB) ]
k.Ravikishore [ravikishore.k at hclsystems.in]
Sat, 23 Sep 2006 16:35:28 +0530
How do I create a bash shell script that removes all files whose names end with a "~" from the home directory and its subdirectories?
----------------------------- HCL Systems, Hyderabad, India
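[ The Gang's answer isn't reproduced here, but the usual approach is a one-liner with 'find'. A minimal sketch, assuming GNU find - print first, then delete once the list looks right:

  find "$HOME" -type f -name '*~' -print    # dry run: show what would go
  find "$HOME" -type f -name '*~' -delete   # actually remove them

With a non-GNU find, replace '-delete' with '-exec rm {} \;'. ]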
[ Discussion continued (2 messages/2.50kB) ]
Mike Orr [sluggoster at gmail.com]
Thu, 26 Oct 2006 10:11:15 -0700
[For the Mailbag, regarding my Aug 9 letter about a Nokia tablet article.]
The more I looked into the Nokia Internet Tablet 770, the more concerned I became about its speed, its capacity, and the cost of the add-ons I considered essential. I finally ended up going over to the dark side and getting a Macintosh laptop. [1] So, if anybody wants to do an article on the Nokia, or on Linux or Open Source use in palmtops in general, we're looking for one.
[1] Well, it's not that dark, it's only twilight compared to a certain other OS. But it was strange reading a 6-page license agreement when I hadn't used proprietary software for nine years. And I still use Linux at work.
-- Mike Orr <sluggoster at gmail.com>
[ Discussion continued (4 messages/4.82kB) ]
Suramya Tomar [suramya at suramya.com]
Tue, 10 Oct 2006 17:04:22 -0400
Hey Everyone, I got the questions below via email, and I was hoping one of you might have an answer for him (the parts in brackets are my own comments):
1. If the /tmp partition is mounted with the noexec and nosuid flags, is it then impossible to run ./configure? If so, how can we get around this?
(I remember reading somewhere that mounting /tmp with noexec and nosuid is a good security precaution, but if it causes trouble with ./configure, is it worth it?)
2. How do I modify the (RPM - atrpms.net) installation of FFMPEG to include the amr_nb / amr_wb fixes, so that I can convert 3GPP video to FLV?
Thanks for the help.
- Suramya
-------- Original Message --------
Subject: Re: FFMPEG Installation
Date: Tue, 10 Oct 2006 17:57:20 +0100
From: <markw2@fireflyuk.net>
To: TAG <tag@lists.linuxgazette.net>, Suramya Tomar <suramya at suramya.com>
References: <004901c6ea0e$05b24a10$0502a8c0 at MARKDESKTOP> <452BC78B.9040008 at suramya.com>
Hi Suramya, thanks for your reply. It's really appreciated.
Please feel free to forward my e-mail to this group as it would be great to have a solution to this.
Thanks again and best regards
Mark
PS. Would you know of any guide/tutorial that explains how to create/modify RPMs - where files are stored, and where spec files can be found once they are installed on a system?
----- Original Message -----
From: "Suramya Tomar" <suramya@suramya.com> To: TAG <tag@lists.linuxgazette.net> To: <markw2 at fireflyuk.net>Sent: Tuesday, October 10, 2006 5:17 PM
Subject: Re: FFMPEG Installation
[ ... ]
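[ Regarding Suramya's question 1: yes, a noexec /tmp commonly breaks ./configure, because autoconf compiles and runs small test programs there. The usual workaround - a sketch, not the only option - is to point TMPDIR at a filesystem mounted without noexec:

  mkdir -p "$HOME/tmp"
  export TMPDIR="$HOME/tmp"   # configure's test binaries now land here
  ./configure

Whether the precaution is "worth it" then becomes a non-issue, since builds no longer depend on /tmp being executable. ]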
[ Discussion continued (2 messages/8.07kB) ]
clarjon1 [clarjon1 at gmail.com]
Tue, 1 Aug 2006 20:50:57 -0400
Hello, gang! I've worked on my perl program a bit, and added stuff like command line switches to it. I've gotten it set up so that, if I don't enter a switch, it will display the contents of the 'calendar' file, and if there is a command line switch, it won't. Nothing fancy...
I've attached it, because I don't want to spend a lot of time stripping out a lot of comments. Hope you don't mind. Here's what I want to do (other than update the TODO in it):
1) More interactive: Nothing like Ncurses, just something along the lines of: if I add my -A switch (for Add) and don't specify any input, it will allow me to enter input rather than just add a blank line.
2) Make it use arrays! That is, be able to read arrays from disk. Or would I just be better off telling it to use postgres as a database? Either way, I don't know what to do.
3) Make it able to search for a specific item, and/or sort by specific items. Very useful, dunno if it's worth the time and effort, really.
That's all I can think of at the moment.
Oh yeah, the announcement!
I'm going to Los Angeles, California on the 8th of this month, to attend the DCLA conference there. I'm going to be in a plane on Monday! Very nervous, I am. I believe we'll be making the two-hour drive down to Toronto, Ontario, and then taking a flight from there all the way to LA (hope I get a window seat!!)
Any LG people in LA? Maybe we could meet (unlikely, but would be nice :D)
Apparently, the hotel is near the conference (a Hilton, I've heard).
* Clarjon1 at jon.clarjon1.linux goes off, being excited...
[ Discussion continued (6 messages/10.84kB) ]
Neil Youngman [ny at youngman.org.uk]
Sun, 1 Oct 2006 19:49:09 +0100
Well, I've had enough of the creeping bit rot in my Mepis installation, and I want to be rid of Mepis. I'm trying to go back to a plain vanilla Debian installation. I've also recently upgraded my system with a SATA controller and a 200GB hard disk, onto which I tried to install Debian with the Debian 3.1 net installer.
The default install hung, so I went for the "expert" install and got an installation on the SATA disk. It's definitely there, I can see it, but I can't boot from it.
It's on /dev/sda5, and grub loads the kernel up, but the kernel panics because, as far as it's concerned, /dev/sda5 doesn't exist. I can only assume that the installed kernel doesn't have the right module (SiI3112) for the SATA controller.
Is there a way to check what modules are built into this kernel?
Is there a simple way around this?
Is it best just to build my own kernel to replace the one that has been installed?
Neil Youngman
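[ One way to check what the installed kernel was built with, assuming the distribution ships its build config (Debian kernels do):

  # The SiI3112 uses the 'sata_sil' libata driver:
  grep -i sata_sil /boot/config-$(uname -r)
  # '=y' means built in, '=m' means modular, absent means not built.

If it's modular, the initrd must load it before the root filesystem can be mounted. For a gzipped-cpio initramfs, you can list the image's contents, add the module, and rebuild:

  zcat /boot/initrd.img-$(uname -r) | cpio -t | grep -i sil
  echo sata_sil >> /etc/initramfs-tools/modules   # path varies with initramfs-tools version
  update-initramfs -u

The driver and file names here are illustrative; verify them against your own kernel and tool versions. ]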
[ Discussion continued (26 messages/45.25kB) ]
Brian Sydney Jathanna [briansydney at gmail.com]
Tue, 26 Sep 2006 14:56:13 +1000
Hi,
I am facing a problem with one of my services, which needs to be constantly monitored and restarted in case it dies. I was just wondering if there is a command / program / script that can be placed in crontab to monitor a process and restart it if it's dead. Thanks in advance.
Brian.
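[ The classic answer - sketched below with a hypothetical daemon name - is a tiny watchdog script run from cron every few minutes:

  #!/bin/sh
  # watchdog.sh - restart 'mydaemon' if it isn't running.
  # 'mydaemon' and its init script are placeholders for the real service.
  if ! pgrep -x mydaemon > /dev/null; then
      logger "watchdog: mydaemon died, restarting"
      /etc/init.d/mydaemon start
  fi

...and a crontab entry to run it every five minutes:

  */5 * * * * /usr/local/bin/watchdog.sh
]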
[ Discussion continued (6 messages/7.64kB) ]
Rick Moen [rick at linuxmafia.com]
Mon, 14 Aug 2006 00:56:47 -0700
Hmm, John's post got held by Mailman, claiming that SpamAssassin had marked it as "possible spam". Let's have a look at what got into Mailman and SpamAssassin's tiny little brains:
Received: from [201.245.212.45] (port=33475 helo=localhost.localdomain) by linuxmafia.com with esmtp (Exim 4.61 #1 (EximConfig 2.0)) id 1GCMEs-0005t8-Hg for <tag at lists.linuxgazette.net>; Sun, 13 Aug 2006 13:07:21 -0700
Received: by localhost.localdomain (Postfix, from userid 1000) id 371D323055; Sun, 13 Aug 2006 15:07:01 -0500 (COT)
Received: from localhost (localhost [127.0.0.1]) by localhost.localdomain (Postfix) with ESMTP id 31E942303E; Sun, 13 Aug 2006 15:07:01 -0500 (COT)
Date: Sun, 13 Aug 2006 15:07:01 -0500 (COT)
From: John Karns <jkarns@etb.net.co>
X-X-Sender: jkarns at localhost.localdomain
To: jeff at jeffroot.us
cc: tag at lists.linuxgazette.net
In-Reply-To: <17630.47578.208478.397536 at localhost.localdomain>
Message-ID: <Pine.LNX.4.61.0608131345520.21008 at localhost.localdomain>
References: <17621.16287.466717.206264 at localhost.localdomain> <20060806022547.GA3848 at linuxgazette.net> <17621.34053.297464.620391 at localhost.localdomain> <20060807030821.GA3903 at linuxgazette.net> <Pine.LNX.4.61.0608091621130.12020 at localhost.localdomain> <20060809214806.GA4892 at linuxgazette.net> <Pine.LNX.4.61.0608121407330.836 at localhost.localdomain> <17630.47578.208478.397536 at localhost.localdomain>
MIME-Version: 1.0
X-SA-Do-Not-Run: Yes
X-EximConfig: v2.0 on linuxmafia.com (http://www.jcdigita.com/eximconfig)
X-SA-Exim-Connect-IP: 201.245.212.45
X-SA-Exim-Mail-From: jkarns at etb.net.co
X-Spam-Checker-Version: SpamAssassin 3.1.1 (2006-03-10) on linuxmafia.com
X-Spam-Level: *
X-Spam-Status: No, score=3.5 required=4.0 tests=AWL,BAYES_00,FORGED_RCVD_HELO, RCVD_IN_DSBL,RCVD_IN_DYNABLOCK,RCVD_IN_SORBS,RCVD_IN_SORBS_DUL autolearn=no version=3.1.1
Subject: Re: [TAG] Talkback:127/howell.html
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
X-SA-Exim-Version: 4.2.1 (built Mon, 27 Mar 2006 13:42:28 +0200)
X-SA-Exim-Scanned: Yes (on linuxmafia.com)

The weird thing is, it was Mailman that objected to your message and held it for my manual approval, claiming that SpamAssassin had flagged it as "possible spam" -- yet, as you can see, SA's score was 3.5, well below the 4.0 spamicity threshold I set in SpamAssassin. I'm not sure what's going on there.
In any event, spamicity = 3.5 is eyebrow-raising enough in itself, so let's see what all those failed tests in the X-Spam-Status line are:[1]
AWL: Auto-WhiteList. This is a simple "address or IP that has been heard from in the somewhat recent past" database, giving ones not heard from recently a small boost to the maybe-distrust-this spamicity score.
BAYES_00: A "Bayesian" statistical test on the body text. The "BAYES_00" result means that the Bayesian estimate of probability is that there's only a 0-1% likelihood of your post being spam, and that result actually reduces the post's spamicity score.
[ ... ]
[ Discussion continued (3 messages/9.40kB) ]
Benjamin A. Okopnik [ben at linuxgazette.net]
Mon, 4 Sep 2006 12:44:39 -0400
On Mon, Sep 04, 2006 at 02:19:37PM +0530, Kapil Hari Paranjape wrote:
> Hello,
>
> Some news.
>
> I suppose this was bound to happen sooner or later. Debian
> maintainers J. Jaspert and E. Bloch lost patience with J. Schilling
> and have forked "cdrtools" to create cdrkit. Some details at
>
> http://debburn.alioth.debian.org/FORK
>
> (The forked tar.gz can be found in http://debburn.alioth.debian.org/).
> Other reasons for the fork can be found on
> http://bugs.debian.org/cdrecord and on the Linux Kernel mailing lists.
Good for them and everyone else, I say. I've been struggling with (and quietly cursing at) Joerg Schilling's DVD-writing software for a long time. On the one hand, it's the only DVD-writing program that I could get to work on this strange DVD drive I have (Matshita DVD-RAM UJ-820S) - but it would only let me write at 1x, due to nothing more than some strange conceit of the author's. As I recall from his explanation on a Web page, he had decided that the Linux /dev implementation sucks, and until it was rewritten to be more like that of BSD, he wouldn't do anything to make 'cdrecord' work reasonably. Elsewhere, in every instance that I've seen him involved in a discussion about any technical issue, I was struck by his inflexibility ("bull-headedness" would be too strong a term, since he is highly technically competent, but the stubborn refusal to even consider any viewpoint other than his own was... less than admirable.)
Now, according to the Debian bunch, he's exhibiting that same kind of intransigence and blind adherence in an area which is clearly not his strong point - licensing issues. [shrug] His right as the author, of course... but this is exactly the reason that forking is such a useful method. This is, in my opinion, as good as Open Source gets.
(BTW: has anyone else noticed that the largest, toughest, most dangerous monster in Quake II is called 'Jorg'? I'm just sayin'. :))
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *
[ Discussion continued (2 messages/5.36kB) ]
Benjamin A. Okopnik [ben at linuxgazette.net]
Thu, 14 Sep 2006 14:53:08 -0400
Hi, Steve -
On Thu, Sep 14, 2006 at 11:04:27AM -0700, srankin wrote:
> Hi Ben
> I've started using Ubuntu, which is so great compared to Windoze, and if I
> can manage to run some navigational software, I could eliminate Windoze
> altogether. (It would be like getting rid of an abscessed tooth.) Any
> ideas? Will Wine run Nobeltec? Or do you know of any Nav software
> available in open source?
I've actually posted about this on the 'origamiboats' list a while back, but I'll repeat it and expand on it a bit. If you don't mind, I'm also going to CC this exchange to The Answer Gang at the Linux Gazette; this is actually a question that I get asked on a regular basis by other sailors who use Linux, and I believe a number of people could benefit from the answer.
I've checked out a number of programs intended for navigational use under Linux; all of these have been useful to some degree. Some, like SeeMyDEnc (http://www.sevencs.com/index.php?page=123), are becoming more useful day by day, as NOAA and other chart-producing agencies convert more and more of their charts to S-57 and other modern charting formats; in fact, S-57 charts and viewers have become so good that they are now treated as a legal equivalent of paper charts in commercial shipping regulations (!). These converted charts, incidentally, are available free of charge at http://chartmaker.ncd.noaa.gov/ - a service that's worth thousands of dollars to cruisers, given the average cost of paper charts.
Anyway, the two programs that I use most in my navigation are 'xtide' and Mayko's 'mxmap'; the former shows a list of currents and tides for any location in the world, while the latter is a very featureful chart viewer. 'mxmap' reads BSB charts, does GPS tracking, allows you to construct and follow routes, set markers, and "scribble" on the charts, and offers lots of other goodies. It also allows you to use, e.g., a scan of a map or a chart - you just set the lat/long of two diagonally opposing corners, and away you go.
The only problem with 'mxmap' is that it is unmaintained; the developers (Mayko), as far as anyone seems able to tell, have disappeared off the face of the earth leaving us with this really nice piece of software. Maybe they went cruising.
Since Ubuntu is Debian-based, you should be able to install 'xtide' via the standard installation mechanism ('apt-get install xtide' as root); 'mxmap' can be found at http://fresh.t-systems-sfr.com/linux/src/ (look for three files with 'mxmap' in the name - one of them is a bunch of sample maps, and the other two are static and dynamic versions of the program.)
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *
[ Discussion continued (6 messages/6.93kB) ]
Faber J. Fedor [faber at linuxnj.com]
Wed, 6 Sep 2006 11:11:07 -0400
Ben's response to this question was published as http://linuxgazette.net/131/lg_tips.html#2-cent-tips.05 --Kat
Does anyone know of an app/script that will automagically set ID3 tags in my MP3 files, using freedb? I got one of those pod thingies that are so popular with the kids these days, and discovered that quite a few of my MP3s don't have ID3 tags on them. I have no idea how that happened, since I've ripped every CD with grip (which means my MP3 directory hierarchy follows the standard format of artist/album/song.mp3).
Googling "mass edit mp3s" brings up mostly Windows apps. I gather most of those will let me makes changes to ID3 tags across a group of MP3s (say changing "Bush, Kate" to "Kate Bush" en masse) as opposed to what I want (look up the info on freedb and apply that info to the MP3s).
I did come across something on IBM Developer Works that uses a couple of Perl modules (we now have Ben's attention!) and freedb but I couldn't get it to work properly, meaning a) I couldn't understand the docs and b) I couldn't redirect the output to a file to see wtf it was doing.
Since I don't want to rip and encode all of those CDs again, I'll happily roll my own, but I thought I'd check with you guys to see if I've missed an existing solution.
-- Regards, Faber Fedor President Linux New Jersey, Inc. 908-320-0357 800-706-0701
[ Discussion continued (3 messages/3.26kB) ]
John Karns [johnkarns at gmail.com]
Thu, 7 Sep 2006 08:56:21 -0500
Hi All,
I'm getting lots of bounce notices regarding incoming mail from gmail. I'm baffled. Would these be generated by my host?
1) I'm popping with fetchmail.
2) Due to postfix (yuck) sending mail to the bit-bucket, I've tried bypassing it and specifying procmail as the MDA in .fetchmailrc.
I don't understand why it cites a failure to establish an SMTP connection with jkarns at localhost:
TEMP_FAILURE: Could not initiate SMTP conversation with any hosts: [localhost (1): Connection refused]
It would make a little more sense to me to see messages like this being generated by postfix on my host, but I don't see how POPping from gmail is related in any way to an SMTP connection.
Any suggestions / help much appreciated.
Here's an example:
[ ... ]
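[ For context: fetchmail's default delivery path is to hand each retrieved message to an SMTP listener on localhost, so "Connection refused" is exactly what you'd see if nothing is listening on port 25 - even though the mail arrived via POP. An 'mda' declaration bypasses SMTP entirely. A sketch of the relevant .fetchmailrc lines, with the account details as placeholders:

  poll pop.gmail.com proto pop3 user "jkarns" pass "secret" ssl
    mda "/usr/bin/procmail -d %T"   # deliver via procmail, skipping localhost SMTP

If the error persists with an mda line in place, check that the line is actually attached to the right poll entry. ]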
[ Discussion continued (9 messages/19.42kB) ]
Kapil Hari Paranjape [kapil at imsc.res.in]
Sun, 10 Sep 2006 14:53:20 +0530
On Fri, 08 Sep 2006, Benjamin A. Okopnik wrote:
> On Wed, Sep 06, 2006 at 07:53:12PM -0500, John Karns wrote:
>
> > [1]* works wonderfully on this aging Dell I8100, best I've ever had a
> > laptop run under Linux - at the end of the day I just suspend to RAM -
> > haven't done a shutdown or reboot now in 14 days! Probably mostly thanks
> > to the improvements in the kernel suspend code, but they seem to have the
> > ACPI scripting functioning very well too. Hibernate hasn't proven to be
> > quite as smooth though.
>
> Mine just don't work, period. Bleagh. :((( To the best of my ability to
> figure it out, the ACPI on this Acer 2012 is so horrendously broken
> that it's not even worth trying to fix (although I'd downloaded Intel's
> ACPI compiler/decompiler, dutifully fixed all the errors, and shoved it
> all back in, it didn't seem to make any difference.) Well, this laptop
> is getting toward the end of its useful life... we'll see how the next
> one goes.
Continuing my experiments with suspend/hibernate etc...
For 2.6.17-rc1 and up, you should try out "uswsusp", which tries to sort out problems with the prior suspend implementation(s).
My current situation is quite happy vis-a-vis both suspend-to-disk and suspend-to-RAM. I use the "stock" Debian linux-image-2.6.17-2-686 (version 2.6.17-8) and initramfs-tools (version 0.77b) with the aforementioned uswsusp (0.2-3).
Everything "works" out of the box except for a minor hiccough with suspend-to-ram which required some hacking as follows.
a. Save the current device config to disk:
   cat /proc/bus/pci/00/02.0 > /var/lib/acpi/vid_0
   cat /proc/bus/pci/00/02.1 > /var/lib/acpi/vid_1
b. Suspend-to-RAM:
   echo -n mem > /sys/power/state
c. Restore the device state on resume:
   cat > /proc/bus/pci/00/02.0 < /var/lib/acpi/vid_0
   cat > /proc/bus/pci/00/02.1 < /var/lib/acpi/vid_1

The developers of suspend are undecided on whether they should wait for the kernel to fix this, or just make the change to s2ram, since the problem seems to be for specific hardware combinations like the Thinkpad R51 with Intel graphics (or perhaps only for that combination!).
I should perhaps also mention that suspend worked fine on my laptop with Ubuntu Dapper which I tested out.
Generally, I have found that "suspend-to-disk" works out-of-the-box with all laptops that I have come across; "suspend-to-ram" seems more tricky. Given my experience, I would say that Ben is singularly unlucky :-(
Regards,
Kapil. --
[ Discussion continued (3 messages/8.71kB) ]
barb [jojodancer1 at cox.net]
Tue, 29 Aug 2006 16:05:03 -0700
I have assigned the subject for this thread, as it came in with none. --Kat
hi, if I download an entire web site and burn it onto my CD, and later on the site is not available, can I bring it up from the CD that I burned earlier? barb
[ Discussion continued (2 messages/2.38kB) ]
Faber Fedor [faber at linuxnj.com]
Fri, 15 Sep 2006 14:20:08 -0400
On 9/15/06, Bradley Chapman <kakadu at gmail.com> wrote:
> Recently I decided to take the plunge and enable SSH on my firewall
> machine, to allow me to get into it remotely. Having done so, I'm now
> agonizing over whether or not I've configured it correctly.
Send us your IP Address and the root password and we'll let you know. Just kidding!
Everything looks fine to me. I would suggest you move sshd from the default port to another one: something high (but below 64000) and random. A cracker seeing something open on port 22 will try an SSH attack, but on port 54256 he won't know what program to use.
> So far as I can tell, I have asymmetric public-private key > authentication working correctly, but I am still asked for the account > password when I SSH into the machine.
IIUC, I think it's asking for your passphrase - the one you used to generate the key-pair, no? To get around that, you have to generate keys with no passphrase (which is considered A Bad Thing).
Not only that, but despite
> setting PermitRootLogin to 'no', and AllowUsers to 'user' (the name of
> the account I set up), when attempting to login as either root or any
> other user on the machine, the ssh client simply asks for the account
> password three times and then fails, instead of failing immediately -
> is it supposed to do that?
Yes, it's supposed to do that. With that behaviour (prompting for the password three times), the cracker isn't sure if A) root logins are disabled or B) he has the wrong password. If it failed immediately, he would know that A was true. Anything to slow the little buggers down.
> TIA,
HTH
--
Regards,
Faber Fedor Linux New Jersey, Inc. 908-320-0357 http://www.linuxnj.com
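[ Pulling Faber's suggestions together, a sketch of the relevant /etc/ssh/sshd_config lines; the port number and account name are examples, and sshd must be restarted after editing:

  Port 54256                 # nonstandard port cuts down automated scans
  Protocol 2
  PermitRootLogin no
  AllowUsers user            # only this account may log in
  PasswordAuthentication no  # keys only - leave 'yes' until key login works
]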
[ Discussion continued (5 messages/8.96kB) ]
Chanchal Mitra [ck.mitra at gmail.com]
Sun, 1 Oct 2006 22:56:16 +0530
Hi
You know what I mean. I have only one OS set up on my hard disk, and I have no use for grub or lilo. How do I boot directly into Linux?
I noticed in the kernel sources there is a file named bootsector. I cannot find any information on how to use it.
It must be simple to do but the question is how?
I am using fedora core 5, arch: x86_64. All updated using yum.
Thanks in advance.
Chami
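[ For background: 2.6 kernels can no longer boot themselves from a raw boot sector - the old bootsect.S (probably the file Chanchal noticed) is now just a stub that tells you to use a bootloader - so some loader is unavoidable. The closest thing to "booting directly" is a zero-timeout config. A sketch for GRUB legacy, with device and file names as placeholders:

  # /boot/grub/menu.lst
  default 0
  timeout 0        # no menu, no delay - straight into the kernel
  hiddenmenu
  title Fedora
      root (hd0,0)
      kernel /vmlinuz-2.6.xx ro root=/dev/sda2 quiet
      initrd /initrd-2.6.xx.img
]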
[ Discussion continued (2 messages/2.30kB) ]
Bob van der Poel [bob at mellowood.ca]
Mon, 09 Oct 2006 17:19:08 -0700
Hi. I've downloaded and installed IE6 running under Wine, from http://www.tatanka.com.br/ies4linux/page/Main_Page. This is a useful step in testing Web page development... I don't care enough about it to actually test with a real Windows machine, but if I can test locally using IE6, it's no big deal.
Under GNOME or KDE, it works just fine. But it crashes (or stalls) under IceWM. I have tried starting it from a terminal and from the toolbar. From a terminal, I get the Wine debug prompt and a "loading" window - but that is about it.
I am thinking there is a path difference or something. But I have checked, and don't see any obvious differences.
I have checked on the tatanka page and don't see anything on this topic. And a request to the IceWM users list has drawn a blank as well. Maybe one of you guys has an idea on this?
-- Bob van der Poel ** Wynndel, British Columbia, CANADA ** EMAIL: bob at mellowood.ca WWW: http://www.mellowood.ca
[ Discussion continued (3 messages/3.90kB) ]
Peter Knaggs [peter.knaggs at gmail.com]
Thu, 31 Aug 2006 11:23:23 -0700
Hi All,
Has anyone else come across this gmail "spyware" page when trying to log into gmail using Firefox?
[ Snipped image ]
As you can imagine, it plays havoc with the gmail notifier extension, as well as making me doubt my ability to read, each time I try to log in. (Trying to type in the contents of the squiggly message displayed in the box seems to be a task my brain is almost incapable of handling early in the morning). So I'm wondering, is this something Firefox is doing to annoy gmail? Or just something gmail has started doing to annoy all their users?
Another question: I've come across this after updating Debian testing: I seem to be losing fonts, or at least the helvetica font I need for vncviewer and ImageMagick.
For example, "display image.jpg" gives me:
display: unable to load font `-*-helvetica-medium-r-normal--12-*-*-*-*-*-iso8859-1'.
display: unable to load font `-*-helvetica-medium-r-normal--12-*-*-*-*-*-iso8859-1'.

And vncviewer gives me this:
$ vncviewer wherever:2
VNC server supports protocol version 3.7 (viewer 3.3)
Password:
VNC authentication succeeded
Desktop name "wherever:2 (myusername)"
Connected to VNC server, using protocol version 3.3
VNC server default format:
  16 bits per pixel.
  Least significant byte first in each pixel.
  True colour: max red 31 green 63 blue 31, shift red 11 green 5 blue 0
Warning: Cannot convert string "-*-helvetica-bold-r-*-*-16-*-*-*-*-*-*-*" to type FontStruct
Warning: Unable to load any usable ISO8859 font
Warning: Unable to load any usable ISO8859 font
Warning: Missing charsets in String to FontSet conversion
Warning: Unable to load any usable fontset
Error: Aborting: no font found

I tried searching for explanations, and as far as I can tell I've got all the font packages installed. I attach the output from running
dpkg --get-selections > /tmp/dpkg--get-selections

in case it could be helpful.
[ snipped ]
I have a machine running Debian stable on which both "display" and "vncviewer" work fine, but comparing the straces hasn't gotten me very far. I was wondering if anyone has any hints; I haven't much experience with / understanding of X11 fonts.
Thanks, Peter.
[ Discussion continued (3 messages/7.60kB) ]
David Sugar [DSugar at boyslatinmd.com]
Mon, 14 Aug 2006 14:54:07 -0400
I am having an issue sending out e-mails from my linux box. Here is the issue:
I am getting the message
"host map: lookup (boyslatinmd.com): deffered"The linux machine is named reeses.boyslatinmd.com (10.1.10.65 internal address)
All mail for the boyslatinmd.com domain is handled by email.boyslatinmd.com (10.1.10.4 internal address)
I have tried setting up domain routing using sendmail, and nothing seems to work, using either the IP address or the host name. I have even tried sending mail directly to the internal IP address, and it doesn't work. Please help ASAP, as I am trying to get a helpdesk server set up before the school year starts in about 2 weeks.
Thanks for the help.
David
David Sugar Administrative Technology Coordinator The Boys' Latin School of Maryland 822 West Lake Avenue Baltimore, MD 21211 410-377-5192 x.
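[ One standard way to force mail for a domain to a specific internal host is sendmail's mailertable feature. A sketch, assuming FEATURE(`mailertable') is enabled in your sendmail.mc:

  # /etc/mail/mailertable - square brackets suppress further MX lookups:
  boyslatinmd.com    smtp:[10.1.10.4]

  # Rebuild the map and restart sendmail:
  makemap hash /etc/mail/mailertable < /etc/mail/mailertable
  /etc/init.d/sendmail restart
]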
[ Discussion continued (2 messages/3.66kB) ]
M.L. Morrison [mlmorrison2 at hotmail.com]
Tue, 17 Oct 2006 12:36:14 -0400
Hello,
I'm an admitted newbie, and need help creating my listings for eBay and similar types of sales.
Creative writing that gets to the point and gets noticed is the main thing I need help with.
Please send info. Thanks so much!
[ Discussion continued (2 messages/1.80kB) ]
Ramanathan Muthaiah [rus.cahimb at gmail.com]
Wed, 18 Oct 2006 19:40:59 +0530
Hi Gang,
I could have Googled to find plenty of answers, but decided instead to seek wisdom here.
My plan is to develop a simple Web-based application using Perl scripts and MySQL as the back-end for data storage. I'm planning to host the application using the Apache HTTP server. I'm aware that Perl modules (DBI) are needed to interact with the database. I have little knowledge of db design.
My question :
I'm lost here: how do I start this whole activity? The reason is, I'm not comfortable embedding the Perl scripts within HTML tags and regular functions for data-management needs.
The intention is to keep the scripts providing the front-end and the back-end processing separate, and not mix it all in the same script(s).
Looking forward to your suggestions.
/Ram
[ Discussion continued (7 messages/10.45kB) ]
Benjamin A. Okopnik [ben at linuxgazette.net]
Sun, 1 Oct 2006 13:21:10 -0400
Somebody took the audio from a Micr0s0ft Vista "we're so brilliant, we've invented all this new stuff" shill session and overlaid it on a video of themselves doing it all on their OS/X desktop - with a few additional twists. High amusement factor, including Bill G's 1977 "jailbird" photos.
http://video.google.com/videoplay?docid=-4134446112378047444
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *
Mulyadi Santosa [mulyadi.santosa at gmail.com]
Fri, 20 Oct 2006 17:34:54 +0700
Dear gang...
I face a trouble here. Does anyone know the frame rates of VCD and DVD? Also, what is the fps (frames per second) of the movie trailers spreading around the Internet? What is the right way to calculate the fps?
My other problem is: suppose I have a video clip at 30 fps. What should I do if I want to double the fps (i.e., to 60 fps)? Is there a tool available to do this?
Any hints would be greatly appreciated... Thanks in advance.
regards,
Mulyadi
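[ For reference: VCD is fixed by the standard at 25 fps (PAL) or 29.97 fps (NTSC), and DVD uses the same rates (film material is often stored at 24 fps and carried via pulldown); downloaded trailers can be anything, so the honest answer is to ask the file itself. A sketch using ffmpeg - note that raising the rate on output duplicates frames rather than interpolating new ones:

  ffmpeg -i clip.avi                     # prints stream info, including fps
  ffmpeg -i clip.avi -r 60 clip-60.avi   # re-encode at 60 fps (duplicated frames)

True motion-interpolated frame doubling needs a specialized filter, and is a much harder problem. ]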
[ Discussion continued (17 messages/23.68kB) ]
Ronald Nelson [rnelson at kennesaw.edu]
Thu, 28 Sep 2006 11:26:24 -0400
I am a member of the network group at a university in Atlanta, Georgia. Some time ago, we had a minor DoS attack from off campus against our main DNS server; at that time, we decided to allow outside connections only from our service provider (Peachnet), and block all others with our firewall. We now seem to have a problem with the root "EDU" servers.
;; AUTHORITY SECTION:
edu.   172800  IN  NS  M3.NSTLD.COM.
edu.   172800  IN  NS  A3.NSTLD.COM.
edu.   172800  IN  NS  C3.NSTLD.COM.
edu.   172800  IN  NS  D3.NSTLD.COM.
edu.   172800  IN  NS  E3.NSTLD.COM.
edu.   172800  IN  NS  G3.NSTLD.COM.
edu.   172800  IN  NS  H3.NSTLD.COM.
edu.   172800  IN  NS  L3.NSTLD.COM.

They cannot resolve anything in our domain, including our primary DNS server. Do we need to include these servers in our rule base? How do we protect against a DoS attack on our DNS servers?
thanks Ron Nelson 770 403-2135
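[ A note on the underlying issue: if your server is authoritative for the domain, resolvers all over the Internet - not just your ISP's - must be able to reach it on port 53, so blocking everyone but Peachnet makes the zone unresolvable from outside. A hedged iptables sketch that reopens DNS while blunting floods; 10.0.0.53 is a placeholder for the nameserver's address, and the rate numbers are only a starting point:

  iptables -A INPUT -d 10.0.0.53 -p udp --dport 53 -m limit --limit 200/second --limit-burst 400 -j ACCEPT
  iptables -A INPUT -d 10.0.0.53 -p tcp --dport 53 -j ACCEPT
  # UDP queries beyond the rate limit fall through to the chain's default policy (DROP).
]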
[ Discussion continued (2 messages/6.04kB) ]
Talkback: Discuss this article with The Answer Gang
Kat likes to tell people she's one of the youngest people to have learned to program using punchcards on a mainframe (back in '83); but the truth is that since then, despite many hours in front of various computer screens, she's a computer user rather than a computer programmer.
When away from the keyboard, her hands have been found full of knitting needles, various pens, henna, red-hot welding tools, upholsterer's shears, and a pneumatic scaler.
[ In reference to More 2 Cent Tips! in LG#100 ]
Michael Pearl ([Michael.Pearl at semcoenergy.com])
Tue, 26 Sep 2006 16:27:16 -0400
I recently read a tip you submitted to linuxgazette.net back in December of 2003:
http://linuxgazette.net/100/lg_tips.html#tips.14
I'm using scponly for one of my users, and recently he asked for public-key login, to bypass the password prompt. Did you create the user as normal and then add him to scponly, or did you add him using scponly's script (setup_chroot.sh) first?
- Michael Pearl - SEMCO Information Technology, Inc.
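[ I can't speak for the tip's author, but key-based login for an scponly account generally doesn't depend on how the account was created: sshd just needs the public key in that account's ~/.ssh/authorized_keys (inside the chrooted home, if you used setup_chroot.sh). A sketch, with the user name and paths as placeholders:

  HOMEDIR=/home/scpuser                  # or the chrooted home directory
  mkdir -p "$HOMEDIR/.ssh"
  cat user_key.pub >> "$HOMEDIR/.ssh/authorized_keys"
  chown -R scpuser "$HOMEDIR/.ssh"
  chmod 700 "$HOMEDIR/.ssh"
  chmod 600 "$HOMEDIR/.ssh/authorized_keys"
]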
[ In reference to Automatic creation of an Impress presentation from a series of images in LG#116 ]
Karl-Heinz Herrmann ([kh1 at khherrmann.de])
Tue, 19 Sep 2006 23:13:28 +0200
Hi,
I've written the article: http://linuxgazette.net/116/herrmann.html
and was contacted by a reader a little while back, telling me it's not working for him. I could verify that, with the current version of the Perl module OpenOffice::OODoc, the script indeed fails to create the slides in the .odp file. It includes the pics - you can extract them - but they are not shown on any slides, and there is only the one default slide.
I have no way of telling at exactly which version my script breaks. The one it's working with is version 1.309; currently, CPAN is at 2.028 - the major revision from 1 to 2 is the likely culprit.
A rather simple change to the script fixes the problem:
just change:

my $test= $document->appendElement ('//office:body',0,'draw:page');
into:
my $test= $document->appendElement ('//office:presentation',0,'draw:page');
and the script works again. A small caveat: I sometimes have problems with the slides being in the correct order (i.e., not the order in the input file), but I can't yet say under what conditions this can happen.
K.-H.
[ In reference to Booting Knoppix from a USB Pendrive via Floppy in LG#116 ]
Benjamin A. Okopnik ([ben at linuxgazette.net])
Fri, 1 Sep 2006 10:50:18 -0400
----- Forwarded message from Djordje Dragic <orange47 at gmail.com> -----
Hello Ben,
I have been trying to modify your script to work with the latest Knoppix, with no luck at all. The file called 'linux' is too big to fit on a floppy; besides, it seems that the latest Knoppix cannot boot from diskette.
Please tell me, what is the latest Knoppix version that can work with your script? Could you please make a boot.img that would work with Knoppix V5.0.1 and put it online somewhere?
-- Best regards, Djordje mailto:orange47 at gmail.com ICQ#: 308328689

* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *
[ In reference to Build a Six-headed, Six-user Linux System in LG#124 ]
Amber Sanford ([amber at modernspaces.com])
Thu, 17 Aug 2006 11:41:25 -0700
Non-linux machines: any recommendations for this set-up though running on Windows XP?
Amber Sanford
[ In reference to With Knoppix at a HotSpot in LG#127 ]
Benjamin A. Okopnik ([ben at linuxgazette.net])
Sat, 5 Aug 2006 22:25:47 -0400
Hi, Jeff -
I'm going to CC the LG Answer Gang on my response, since this is pretty much the purpose of TAG; also, chances are that someone else may be able to cover any areas that I miss.
On Sat, Aug 05, 2006 at 07:02:23PM -0600, jeff at jeffroot.us wrote:
> Ben Okipnik;
"Okopnik", please.
> In the LG#127 article "With Knoppix at a HotSpot", you made the
> comment:
>
>   [ I do a lot of travelling, and connect to a wide variety of
>   strange WLANs. In my experience, at least, connecting to a
>   wireless LAN with Linux is usually just as simple as Edgar
>   describes. -- Ben ]
>
> Well, I have a very different experience. I have no trouble at all
> connecting to a managed wifi network; at home or work, I just set
> the ESSID and WEP key in /etc/network/interfaces, and "netscheme
> work" does the rest. But this same machine has never managed to
> connect to an open wifi network.
>
> Today, I visited my municipal wifi and tried to connect. I ran
> Kismet to see that the ESSID was "OldTownWifi" and that wep and
> encryption were both off. So I used iwconfig to set the essid and
> managed mode, then ran dhclient. Nothing. No response from their
> server at all.
>
> So how about this: you help me understand why I can see an open
> hotspot with Kismet but can't seem to give the right incantation to
> connect, and I'll write up a "dummy's guide" for LG.
Well, that sounds like a fair sort of deal... but I don't know that I can answer the question as posed. In my mind, at least, it comes down to "why doesn't dhclient work as it should?" - and I can't really tell you, since I don't use it. I suppose you could always take a look at your '/var/log/{daemon,kern}.log' or '/var/log/messages' and figure out where it's failing.
To be a bit more specific, I've tried using 'dhclient' in the past - I don't recall why - but it simply didn't work no matter what I tried, on a "known good" Ethernet connection that worked fine with 'pump' on a different machine. I pounded on the config file for a while, tried everything in the manpage - then gave up and installed 'pump'... and everything instantly started working; if I recall correctly, it required no configuration on my part.
I've been using 'pump' ever since.
These days, I usually use the 'ifup/ifdown' front ends instead of using it directly (although sometimes I forget and use it directly; it works fine either way.) I never have to set the ESSID unless I'm trying to get onto a private network; the one time that I ended up wrestling with it turned out to be a case of solving the wrong problem - the responsible bit was a broken kernel module for my ipw2200, and not the client at all. As I've said, It Just Works.
[ ... ]
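[ For readers wanting the bare-hands version of what's being debugged here, the manual association sequence looks roughly like this; the interface name is an example, and 'pump' and 'dhclient' are interchangeable at the last step:

  iwlist eth1 scan                                # confirm the AP is visible and unencrypted
  iwconfig eth1 mode managed essid "OldTownWifi"
  iwconfig eth1                                   # should now show an Access Point MAC, not "Not-Associated"
  pump -i eth1                                    # or: dhclient eth1

If the third step still shows "Not-Associated", no DHCP client will ever get an answer - which at least separates link-layer trouble from DHCP trouble. ]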
[ In reference to Creating a Rudimentary Kiosk System using FVWM in LG#128 ]
Thai Duong ([thaidn at yahoo.com])
Mon, 16 Oct 2006 12:04:56 -0700 (PDT)
Hi there,
I'm Thai, from Vietnam. I just want to introduce you to Kiosk Appliance (http://kiosk.rpath.org), a distro focusing on users' security and privacy, inspired by your article "Creating a Rudimentary Kiosk System using FVWM". Using rPath technology, I provide 3 versions of Kiosk Appliance: a VMWare image, a LiveCD, and an installable CD. Included is just enough rPath Linux and FVWM to run a locked-down version of Firefox. Firefox is also pre-configured to automatically reset after each use, so that personal information is never stored permanently. The current stable version is 0.2, which features:
1) Clear all personal data on exit;
2) Reset after a period of inactivity;
3) Disable form history;
4) Disable page caching;
...and much more, you can view more details at http://www.rpath.org/rbuilder/project/kiosk/release?id=5096
I've been working on Kiosk Appliance 0.3, which has many more features, such as the ability to disable access to Firefox's internal URLs and dialogs (e.g., about:* URLs and the Firefox Preferences dialog). I hope to release K.A. 0.3 in the next few days. Many thanks to Thomas Adam for helping me configure fvwm.
Any thoughts or suggestions? I'm really anxious to hear from you.
Best regards,
Thai Duong.
[ In reference to The Geekword Puzzle in LG#130 ]
Benjamin A. Okopnik ([ben at linuxgazette.net])
Mon, 4 Sep 2006 20:36:12 -0400
----- Forwarded message from Nguyễn Thái Ngọc Duy <pclouds at gmail.com> -----
Date: Tue, 5 Sep 2006 07:10:05 +0700
From: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
To: TAG <tag@lists.linuxgazette.net>, editor at linuxgazette.net
Subject: Talkback:130/okopnik.html

Hi - just want to shout out that the puzzle is great! I've been a long-time reader of Linux Gazette (since 2001, I guess), although I lost interest in it for a while (maybe because I'm no longer a Linux newbie). Now I'm really looking forward to the next issue. :D I noticed that September's puzzle is harder than August's, but I won't complain just because I couldn't complete the puzzle myself. I think puzzles are good for newbies, too, to examine what they've learned and find some fun in it. For that, we need easier puzzles with common acronyms such as GNOME, KDE, and other common commands. So, if you have enough manpower, two puzzles in an issue would be great (one for geeks and one for newbies). Cheers,
-- Duy
[ In reference to The Monthly Troubleshooter: Installing a Printer in LG#130 ]
Andrea Fleming ([peaclvr at gmail.com])
Thu, 5 Oct 2006 23:57:07 -0700
I have an HP 1200, and every time I try to scan, the computer does not recognize the scanner within the printer. I have installed everything on the HP disk, but the computer doesn't see it. I can print just fine, but no scans. Help!
[ In reference to Talkback in LG#130 ]
Chris Clayton ([chris_clayton at f1internet.com])
Sun, 8 Oct 2006 21:16:07 +0000
Hi,
Harring Figueiredo suggested:
Shouldn't the tests be if(buffer_... = NULL)? Probably a typo
You agreed.
In fact, shouldn't it be if(buffer_... == NULL)? (Comparison, not assignment).
Regards
Chris
[ In reference to Sharp does it once again: the SL-C3200 in LG#131 ]
jeff ([moe at blagblagblag.org])
Mon, 02 Oct 2006 15:44:34 -0300
I have a similar Zaurus 3100. I checked out both OpenZaurus' OPIE and GPE. They were nice, but felt too similar to a Palm Pilot or a "plain" old PDA.
If you really want your Zaurus to be just like a "regular" PC running a full blown OS, OpenBSD on the Zaurus is great. It's the full OS installation with gcc, X windows, etc. Basically, if it compiles on OpenBSD it'll work on the Zaurus. Completely self-hosted development--no need for cross compilers on other boxes. Combined with a lightweight desktop such as blackbox it's amazing.
-Jeff
[ In reference to Mailbag in LG#131 ]
Ville ([v+tag at iki.fi])
Tue, 3 Oct 2006 10:27:59 +0300
On Fri, Jul 21, 2006 at 02:12:47PM +0300, I wrote:
> On Sun, Jul 16, 2006 at 09:26:31PM +0300, I wrote:
>
> > [ ... ]
Oh, the story hit the news. I'm sorry I wasted so many of your precious column inches. Hopefully, someone finds the thread interesting or even useful.
[ In reference to Mailbag in LG#131 ]
Mike Orr ([sluggoster at gmail.com])
Mon, 9 Oct 2006 11:31:09 -0700
Just to clarify, Groups of Linux Users Everywhere (GLUE) was never related to Linux Gazette. Linux Gazette was hosted at SSC (linuxgazette.com) for a period of time. GLUE was a separate project (I think it was created by SSC but I don't know its origins). Because they were on the same server, SSC may have put it under the linuxgazette.com domain at some point, but it was never a part of our ezine. I made one version of GLUE and I thought it was under the linuxjournal.com domain, but that was so many years ago that I don't remember for sure.
-- Mike Orr <sluggoster at gmail.com>
[ In reference to 2-cent Tips in LG#131 ]
Richard Neill ([rn214 at hermes.cam.ac.uk])
Tue, 03 Oct 2006 00:56:29 +0100
Re: file renaming (here, using "wavren"): may I recommend installing the qmv and imv utilities? They are excellent: http://www.nongnu.org/renameutils/
imv filename
  -> slightly faster than mv

qmv
  -> brings up an editor with columns for oldname and newname; checks for errors.
Best wishes,
Richard
[ In reference to On Qmail, Forged Mail, and SPF Records in LG#131 ]
Rodriguez, Candido ([Candido.Rodriguez at pearsoned.com])
Thu, 05 Oct 2006 14:16:03 -0400
I just recommended using GMail instead of sendmail (because I read that GMail was secure).
The only problem is that I used my real name... ooops
Please submit your News Bytes items in plain text; other formats may be rejected. A one- or two-paragraph summary plus a URL has a much higher chance of being published than an entire press release. Submit items to bytes@linuxgazette.net.
At its annual developer conference and geek-fest/love-in, the Fall Intel Developer Forum in San Francisco, Intel gathered partners and technologists to demonstrate its leadership in a plethora of technologies. Intel showed off new graphics engines that rival the gaming and video features of nVidia and ATI [ATI was recently bought by Intel's chief competitor, AMD], LAN and WiFi solutions, virtualization, experiments in solar power and fuel cells, and numerous other forward-looking research projects. Intel used the IDF conference as a springboard for new initiatives, and to explain its road map in more detail than in the past.
Intel committed itself and its partners to releasing quad-core x86 chips before the end of this year, moving up its earlier 2007 release date - and prototype systems from many Intel partners were on hand as proof. To be sure, these were modified dual-socket motherboards that can now take dual-core chips, but they showed that most modern OSes can take advantage of the extra cores and perform reasonable job scheduling without modification.
Code-named Kentsfield, the first chips will still sport the "Core 2" branding, but will have 4 cores, as in 2 dual-core chips in a single socket package. This is partly done to improve product yields [by about 20% over 4 cores on the same piece of silicon, according to Intel CEO Paul Otellini]. The follow-on chip will be called Clovertown, and will be in the Xeon 5300 series, a quad-core variant of Woodcrest. First out of the chute are the quad-core products aimed at the desktop, followed by server editions. [See photo here: ftp://download.intel.com/pressroom/kits/idffall_2006/Intel%20Core2%20Extreme%20Quad-Core%20microprocessor.jpg ]
Patrick Patla, director for AMD's Server and Workstation Business, in Austin, Texas, was quoted in eWeek calling the Intel Quad core a "Franken-quad," and less efficient with a single memory bus than the true quad-core chip that AMD will release in early 2007.
Gigabits and gigahertz make for a massively parallel "TeraScale" processor: Intel's peek at the future of computing included an 80-core CPU. Seemingly one-upping the IBM-Sony Cell processor, 'Tera' is more of a design point to which Intel is applying its vast engineering resources.
Operating at 3.1 GHz, the goal of this experimental multi-core chip is to test interconnect strategies for rapidly moving terabytes of data from core to core, and between cores and memory. This monster uses SRAM for speed, and directly connects memory blocks to each processor. One possible design puts entire RAM blocks in the core design; at 100 MB or more per core, that would be the entire system memory - 8 gigabytes or more - without a memory controller. Research efforts are aimed at trying to design DRAM to work in this manner, so as to reduce power consumption. This could be a back door for Intel to re-enter the memory market.
Potential uses of the technology include high-performance devices to play photo-realistic games, share real-time video and do multimedia data mining. Intel Senior Fellow and Chief Technology Officer Justin Rattner said "When combined with our recent breakthroughs in silicon photonics, these experimental chips address the three major requirements for tera-scale computing - teraOPS of performance, terabytes-per-second of memory bandwidth, and terabits-per-second of I/O capacity."
For more info, see: www.intel.com/go/terascale and http://www.intel.com/pressroom/kits/events/idffall_2006/pdf/Intel%20Quad%20Core%20Processor%20Update%20%E2%80%93%20Sept.%202006.pdf
Black Duck Software, a provider of software compliance management solutions, announced in September that the Eclipse Foundation has purchased and deployed Black Duck's protexIP(TM)/development platform. Eclipse uses protexIP to review software submitted by committers and ensure it is in compliance with the specific software licensing requirements of the Eclipse Foundation.
Eclipse is a community of open source projects, each comprised of its own group of independent developers. The process of open source software development regularly involves the assembly of open source code with invented and reused components, and as a result, various licenses can govern various parts of an application. The open nature of Eclipse's projects intensifies the need for Eclipse to evaluate the copyrights governing their code bases.
"Companies worldwide are capitalizing on applications developed by the Eclipse community, and many software vendors sell products that are dependent on Eclipse," said Mike Milinkovich, executive director of Eclipse Foundation. "For that reason, it is absolutely vital for us to analyze our code before we release it to our community."
protexIP compares software code to the protexIP KnowledgeBase, the most complete database of information on open source software components, code, and license obligations available today. It contains information on tens of thousands of open source projects from more than 2,000 sites worldwide, and on more than 650 open source and commercial licenses.
"The Eclipse Foundation represents some of the most innovative work in application development today, as teams of developers are working together to create tools and frameworks that will help build better business applications," said Douglas A. Levin, CEO of Black Duck Software. "Eclipse's purchase of protexIP makes sense given the decentralized and very successful nature of the community's process. Eclipse now has greater certainty that the licenses governing the code are in order."
Terracotta, Inc., a leader in solutions for enterprise Java scalability, has announced availability of its Eclipse plug-in for Terracotta DSO, the company's enterprise-class JVM clustering technology. Bundled with Terracotta DSO, the new plug-in makes Terracotta's point-and-click clustering functionality available from within the Eclipse IDE and demonstrates the company's on-going commitment to open source integration.
"Eclipse has taken the developer world by storm and become one of the most popular open source IDEs for Java application development," said Ari Zilka, founder and chief technology officer at Terracotta. "Terracotta DSO already provides plug-in capacity and availability for Java applications running on two or more machines. This plug-in further simplifies the clustering process by automatically generating the necessary configuration files."
Terracotta DSO (Distributed Shared Objects) is a runtime solution that allows data to be shared across multiple JVMs without the need for proprietary APIs, custom code, databases, or message queues. With Terracotta DSO, objects can be clustered at runtime just by specifying them by name.
Typically, objects and data to be clustered, as well as classes to be instrumented, are manually declared in an XML configuration file. With the Eclipse plug-in, the declaration process is automated via graphical representation of classes and objects, which can be browsed and acted upon within the IDE. Right-clicking objects and selecting Terracotta options from the context menu automatically generates the XML configuration file. The point-and-click automation improves productivity and eliminates iterations.
To facilitate application testing, the Eclipse plug-in lets developers start and stop Terracotta servers and clients from within the Eclipse IDE. In addition, the plug-in provides a more intuitive, developer-friendly XML experience by replacing raw text with graphical representations of sub-declarations within the XML configuration file.
More information on Terracotta can be found at http://www.terracottatech.com.
IDG World Expo announced the successful completion of LinuxWorld Conference & Expo(®), held August 14-17, 2006. More than 10,000 participants from around the globe arrived at San Francisco's Moscone Center to examine the latest products and solutions, hear about emerging trends in the industry, and experience new content. This year's event also paid tribute to the 15-year anniversary of the Linux kernel, with a spirited panel discussion entitled "Celebrating 20 Years of Linux", which highlighted several industry milestones while envisioning the future of Linux as if the year were 2011.
"We're extremely pleased with the results of LinuxWorld San Francisco," said Melinda Kendall, group vice president at IDG World Expo. "The exhibit hall was crowded with attendees, media coverage of the event was tremendous, the conference program was very well attended and several of our new programs including CIO Summit, Linux in the Channel Day and Healthcare Day were extremely well received."
Two key themes that resonated throughout the show were mobile Linux and virtualization. Motorola, Nokia and PalmSource, all new exhibitors at LinuxWorld San Francisco, touted their line of mobile Linux products and two keynote addresses focused on these themes: "Creating Must-Have Mobile Experiences With Linux," by Greg Bisio, Corporate Vice President of Motorola, and "Where Virtualization is Leading Your IT Department" by Peter Levine, CEO, XenSource. All of the keynote addresses from LinuxWorld can be downloaded from the LinuxWorld web site at www.linuxworldexpo.com.
Linus Torvalds announced the release of the 2.6.18 Linux kernel, following the previous stable kernel release by three months. With a hearty "Arrgh!," he said, "she's good to go, hoist anchor!", this being the second year in a row that a kernel release has coincided with 'Talk Like A Pirate Day' . "Here's some real booty for all you land-lubbers," Linus continued, "there's not too many changes, with t'bulk of the patch bein' defconfig updates, but the shortlog at the aft of this here email describes the details if you care, you scurvy dogs." In keeping with the theme, he signed the announcement, "Linus 'but you can call me Cap'n'".
The new kernel is here: http://www.kernel.org/pub/linux/kernel/v2.6/patch-2.6.18.gz
The newest stable release of Ubuntu, a popular Debian fork, came out on October 26th. Among the many additions and fixes in the release, the development team seems especially proud of Tomboy, F-Spot, GNOME 2.16, Firefox 2.0, Evolution 2.8.0, and a bunch more. The server release boasts a pre-release of the upcoming LTSP-5 (Linux Terminal Server Project), a popular server package that allows multiple thin client terminals to connect to and use the server's software and hardware, requiring the thin client only to handle display, input, and output.
Official announcement: https://wiki.ubuntu.com/EdgyAnnouncement?highlight=(edgy)
The first beta of Red Hat Enterprise Linux 5, based on Fedora 5 and some early features from Fedora 6, has been available since September -- if you have a Red Hat Network account -- but some reviewers are not impressed. The package management tools don't work well with Red Hat repositories (especially the included version of 'yum'), and the management tools for the brand-new Xen virtualization hypervisor have drawn complaints. But that's why there are beta tests.
RHEL 5 is based primarily on the 2.6.18 Linux kernel. It comes in client and server versions with optional directories for additional functionality like virtualization and clustering.
You can access the beta with a temporary subscription to RHN at the RH evaluation page: http://www.redhat.com/rhel/details/eval/ [Does not reflect the RHEL 5 beta by this issue's publication]
MEPIS has released the SimplyMEPIS 6.0-1 DVD Edition, an update of SimplyMEPIS 6.0, MEPIS's first Ubuntu-based edition, released earlier this summer. The SimplyMEPIS 6.0-1 bootable DVD includes not only hundreds of bug and security fixes, but also the 1,900 packages of the three SimplyMEPIS Extras CDs. This MEPIS edition has also been cover-mounted on the October 2006 issue of Linux Magazine from Linux New Media AG, available at thousands of bookstores and newsstands worldwide, including Borders, Barnes & Noble, Fry's, Micro Center, Chapters, WHSmith, and Eason.
Warren Woodford of MEPIS said, "We enjoyed working with Linux Magazine to produce this coordinated release. The cover-mounted DVD is convenient for those who do not have the opportunity or inclination to download ISO files and burn their own CDs or DVDs. By featuring our new DVD, Linux Magazine will enable thousands of dissatisfied Windows users to try and, hopefully, switch to SimplyMEPIS."
The ISO image is available for download in the 'release' subdirectory at the MEPIS Subscriber's Site and at MEPIS public mirrors. Current users of SimplyMEPIS 6.0 do not need to install 6.0-1 but can update, as usual, through the Ubuntu and MEPIS package pools.
SimplyMEPIS is a full-featured desktop solution that integrates the Linux OS with hundreds of popular application packages, including KDE, OpenOffice, Amarok, Firefox, and Thunderbird. Satisfied users are encouraged to help offset the costs of development by making a contribution or a purchase at the MEPIS store at www.mepis.org/store.
A new release of BeleniX is available after some delay. It brings in several new features and software upgrades, with more upgrades coming soon. The main points of this release are:
SIPphone, Inc., developers of the free Gizmo Project Internet calling and IM software, today announced that their All Calls Free calling plan is available for businesses worldwide, providing significant savings for small to mid-sized companies that want to enjoy free calling between co-workers and/or remote offices. Any business using the free Asterisk PBX software, or other premium PBX solutions, can also boost savings and workforce efficiency with access to high-end features previously only available to large businesses. Gizmo Project includes free conference calling, customizable voice-mail, Instant Messaging (IM), and a host of other convenient features. More information may be found at www.gizmoprojects.com/business.
Using Gizmo Project as an office softphone, workers can easily place PC-to-PC calls without the burden of special VoIP phones or the expense of traditional phone call charges. Further cost savings occur when employees use Gizmo Project to make free calls to landlines and mobile phones of co-workers in 60 countries under the All Calls Free plan. The United States, Brazil, and most European and Asian countries are included; a full list of countries may be found at www.gizmoproject.com/free. Calls to other countries, or calls to people who do not have Gizmo Project, are billed at the industry-low rates found at www.gizmoproject.com/rates.
"Large companies have been able to link offices together and use the Internet to save money on inter-company communications. Now small- and mid-size businesses can also have their employees call each other for free no matter where they are located," said Michael Robertson, chairman and CEO of SIPphone. "Companies running the Asterisk PBX or other premium PBXs also gain a new communications tool that can route calls through almost any network, allowing mobile workers to be reached anywhere as if they were physically in their office," Robertson added.
Any company can take advantage of the Gizmo Project for Business program. To get started, the free Gizmo Project software for Microsoft Windows, Apple Macintosh and Linux can be downloaded at http://www.gizmoproject.com/download.
Key features of Gizmo Project for business:
Gizmo Project supports a wide variety of PBXs: Asterisk, premium PBXs developed by such companies as Trixbox, SwitchVox, Epigy, and webFones, and other SIP-based PBXs. Workers can be called via their PBX or directly through the Gizmo network. More information about Asterisk support may be found at www.gizmoproject.com/asterisk. Specific information about setting up Asterisk for use with Gizmo Project is at www.gizmoproject.com/setupasterisk.
Qlusters, Inc., now has plug-ins that support the FreeBSD(R), Solaris-x86(TM), and Solaris-Sparc(TM) operating systems. Until now, most systems management solutions have focused on supporting a limited number of operating systems. openQRM is the only open source systems management platform that provides IT professionals with a solution for managing Linux, Windows, FreeBSD, and Solaris. These new tested and supported plug-ins enable the openQRM platform to quickly recognize and provision resources for FreeBSD and Solaris applications, in addition to the existing support for Linux and Windows.
Additionally, these plug-ins provide:
Qlusters has taken steps to expand the project's presence over the last several months, recently announcing plug-ins for popular virtualization offerings from VMWare, Xen, QEMU and Linux VServer. In addition, openQRM has received accolades from leading industry sources. In July, SourceForge chose openQRM as its Project of the Month, while in August, Qlusters was named both a "Hot Open Source Company" by Red Herring as well as an "Open Source Company to Watch," by Network World.
openQRM is a leading open source systems management solution for managing enterprise data center virtual environments and for data center provisioning. openQRM has an open plug-in architecture that enables easy integration with existing data center applications, such as Nagios(TM) or VMWare(TM). For more information visit www.openqrm.org.
One user, Espressocode, is running an active, load-balanced DB2 database cluster between Toronto and San Francisco: a distance of almost 2,500 miles.
http://www.eweek.com/article2/0,1895,2020321,00.asp
Hacktivismo, an international group of tech-savvy human rights workers affiliated with the Cult of the Dead Cow [cDc], has released Torpark, an anonymous, portable Web browser based on Mozilla Firefox. Torpark comes pre-configured, requires no installation, can run off a USB memory stick, and leaves no tracks behind in the browser or computer. Torpark is a highly modified variant of Portable Firefox that uses the TOR (The Onion Router) network to anonymize the connection between the user and the website being visited.
When a user logs onto the Internet, a unique IP address is assigned to manage the computer's identity. Each website the user visits can see and log the user's IP address. Hostile governments and data thieves can easily monitor this interaction to correlate activity and determine a user's identity.
Torpark causes the IP address seen by the website to change every few minutes to frustrate eavesdropping and mask the requesting source. For example, a user could be surfing the Internet from a home computer in Ghana, and it might appear to websites that the user was coming from a university computer in Germany or any other country with servers in the TOR network.
It is important to note that the data passing from the user's computer into the TOR network is encrypted. Therefore, the user's Internet Service Provider (ISP) cannot see the information that is passing through the Torpark browser, such as the websites visited, or posts the user might have made to a forum. The ISP can only see an encrypted connection to the TOR network.
But there are limitations to the anonymity: Torpark anonymizes the user's connection but not the data. Data traveling between the client and the TOR network is encrypted, but the data between the TOR network and websites is unencrypted. So a user should not use his/her username or password on websites that do not offer a secure login and session (noted by a golden padlock at the bottom of the Torpark browser screen).
Torpark is being released under the GNU General Public License and is dedicated to the Panchen Lama.
Download Torpark at: http://torpark.nfshost.com/download.html
IBM has inked a five-year deal with Magna Electronics, a Canadian firm that makes auto electronics. The goal of the partnership is to develop systems that protect drivers and mediate interactions with neighboring vehicles.
Using IBM's Unstructured Information Management Architecture (UIMA), systems would be developed to analyze traffic patterns and real-time performance data, and to react to potential problems. Besides collision avoidance, driver alertness would be monitored. Another possibility is headlights that dim for approaching cars. The technology would also include autonomic systems that could diagnose internal problems, since a safety system that fails when needed provides no reliable benefit.
PNY Technologies, Inc. announced its latest MaxFile(TM) Attache(R), a USB 2.0 micro hard drive with 12GB of storage space. The extra-small drive includes a Migo(TM) backup and synchronization software download, so users can sync everything from their e-mail, documents, favorites and settings wherever they go.
"MaxFile Attache provides an ideal solution for users looking for ... cost-effective, high-capacity, compact, portable storage," said Dean Delserro, senior marketing manager, flash, for PNY Technologies. "Only slightly larger than a traditional USB flash drive, MaxFile's small form factor is ideal for anyone that needs to safely carry loads of important information with them. MaxFile Attache is a perfect choice for the user that requires more storage at a lower cost per megabyte than is traditionally available on a USB Flash drive, and still wants to be able to carry it in their pocket, purse, briefcase or backpack. ...Moreover, MaxFile Attache eliminates the need to travel with a bulky laptop, particularly at a time when airline travelers need to limit their carry-on items"
With 12GB of storage, MaxFile Attache can store thousands of documents, presentations, digital photos, songs, and games - or over 25 hours of video - and features read and write speeds of up to 11MB/sec. Moreover, the device features a durable aluminum outer casing and is powered through its sturdy USB connector.
PNY's MaxFile Attache is available starting in September from retailers and e-tailers with an MSRP of $169.
The X PRIZE Foundation announced that http://www.xprize.org will host the first-ever blog from space during Anousheh Ansari's historic flight to the International Space Station. The webpage was designed by the X PRIZE Foundation, a nonprofit prize institute, in partnership with Prodea Systems, the home technology company that is sponsoring her journey.
"My ultimate goal is to bring this experience ... to more and more people and to inspire young woman and men to go into the fields related to space," said Anousheh Ansari during a recent interview from the Baikonur Cosmodrome in Kazakhstan. "I hope that thousands of individuals from around the world will visit the X PRIZE site to learn what its like to fly into orbit."
In addition to the first-ever blog from space, visitors can read Ansari's life story, as well as watch exclusive video and interviews from her training, preflight activities, launch, and landing. Visitors will also have exclusive access to the first episodes of the X PRIZE Foundation Futurecast, a new podcast featuring visionaries and entrepreneurs from around the globe talking about what the future holds for us. The first episodes, which can be found on the X PRIZE Foundation website and Apple's iTunes podcast directory, will be the first podcast from space.
An active proponent of world-changing technologies, Anousheh Ansari has been immersed in the space industry for years. Anousheh, along with her brother-in-law Amir Ansari, co-founder and chief technical officer of Prodea Systems, provided the title sponsorship for the Ansari X PRIZE, a $10 million cash prize awarded to Burt Rutan in 2004 for the first non-governmental organization to launch a reusable manned spacecraft into space twice within two weeks. Anousheh Ansari is also a member of the X PRIZE Foundation Board of Trustees. Her philanthropic work through the X PRIZE Foundation has made her an integral figure in the development of the private spaceflight industry.
"The X PRIZE Foundation is very proud to host Anousheh's blog. We are a 21st century organization pushing the boundaries of technology," said Dr. Peter H. Diamandis, Chairman and CEO of the X PRIZE. "We thought blogging from space was the proper use of technology to reach today's youth. We hope millions will visit our website and learn about Anousheh's mission as well as the X PRIZE Cup in New Mexico, and our future X PRIZES for genome sequencing and hyper fuel-efficient automobiles."
On Monday, September 18, 2006, Ansari is scheduled to blast off in a Russian Soyuz spacecraft from the Baikonur Cosmodrome in Kazakhstan, as part of a crew-exchange flight to the International Space Station. Her journey will last 10 days and will include a two-day trip to the International Space Station aboard the Soyuz, as well as numerous experiments and activities that she will film in order to create education programs upon landing.
In 2004, the $10 million Ansari X PRIZE proved that offering a prize is an effective, efficient, and economic model for accelerating breakthroughs in science and technology. Based on that success, the X PRIZE Foundation is now expanding its efforts to offer more prizes in the space industry, as well as in the areas of health, energy, transportation, and education.
Modern PCs spend most of the day idle. By using the Multiplied Linux Desktop Strategy, organisations can now leverage this unused computing power and connect up to 10 full-featured workstations to a single, shared SLED 10 or openSUSE 10.1 computer. For administrators, this means only one computer to install, configure, secure, back up, and administer instead of 10. For users, it means a rich experience that is indistinguishable from a single-user computer for typical office applications. It is ideal for Linux computer labs, Linux thin clients, Linux Internet cafés, and Linux point-of-sale terminals, where users are in close physical proximity to the host machine.
Related links:
http://www.omni-ts.com/linux-desktop/linux-desktop-migration.html
It's been going on almost forever: Larry Wall's annual address to the Perl community, called the State of the Onion. This summer marked the tenth anniversary of the State of the Onion, and the full text of the 2006 address is available here: www.perl.com/pub/a/2006/09/21/onion.html.
The 1.0.beta2 release of Fulvio Ricciardi's ZeroShell Linux server distribution is now available for download. The main new feature of this release is the ability to use ZeroShell as a Captive Portal gateway, i.e., a Wi-Fi hotspot with Web-based authentication, similar to the ones used in hotel networks and public Wi-Fi hotspots.
Other features of ZeroShell include:
The next release will include support for QoS and bandwidth limiting.
ZeroShell is available as a LiveCD or a compact flash image for embedded devices.
http://www.zeroshell.net/eng/
Eilat, Israel, October 23, 2006 - StartCom congratulates the Mozilla team on the successful release of the new Firefox 2.0 Web browser. After a year of development since the last release, this award-winning, free Web browser has become even better than before. The new Firefox Web browser is available for immediate download! In addition, StartCom is very pleased to announce the availability of the StartCom Certification Authority as an included and trusted instance for the issuance of digital certificates in Mozilla software, including the new Firefox browser.
The StartCom Certification Authority has matured a lot since its first announcement in February 2005, and today offers a range of products, from free digital certificates to PKI solutions for corporations. The project started at http://cert.startcom.org with a limited wizard to create free digital certificates for Web servers. Since then, StartCom has developed various additional products for private and commercial use, undergone a third-party audit, and issued over 20,000 digital certificates. But today - with the release of the new Firefox Web browser - marks the first time that free digital certification (provided by StartCom) is supported and trusted by a major browser vendor with a significant market share. This makes it very easy for the subscribers of the certificates and the relying parties (visitors) of digitally secured Web sites (SSL) to use this free service! The signing and encryption of email are supported in the same manner, which allows the protection of the identity and privacy of the user.
This event is also an excellent opportunity to make a few more announcements: StartSSL(™) is the new trademark for products and solutions of the StartCom Certification Authority, and is available at www.StartSSL.com. Additionally, StartCom has started the StartSSL(™) Web-of-Trust (WoT), an attempt to create a community network of notaries and members, where notaries perform the verification of fellow members. Please visit the new Web sites for more information; everybody is invited to participate in the StartSSL(™) WoT or to make use of any of StartCom's free or paid products and services.
Talkback: Discuss this article with The Answer Gang
Howard Dyckoff is a long-term IT professional with primary experience at Fortune 100 and 200 firms. Before his IT career, he worked for Aviation Week and Space Technology magazine, and before that used to edit SkyCom, a newsletter for astronomers and rocketeers. He hails from the Republic of Brooklyn [and Polytechnic Institute] and now, after several trips to Himalayan mountain tops, resides in the SF Bay Area with a large book collection and several pet rocks.
Benjamin A. Okopnik ([ben at linuxgazette.net])
Mon, 4 Sep 2006 15:27:16 -0400
A couple of years ago, I decided to stop wrestling with what I call "encoding craziness" for various bits of non-English text that I have scattered around my file system. Russian, for example, has at least four different encodings that I've run into - and guessing which one a given text file was written in was like a game of darts played in the dark. At 300 yards. With your hands tied behind you, so you had to use your toes. Oh, and while you were severely drunk on Stoli vodka. UTF-8 (Unicode) allowed me to, well, unify all of that into one single encoding that was readable without scrambling for whichever character set I needed (and may or may not have installed.) Better yet, Unicode usually displays just fine in HTML browsers - no special entity encoding is required.
For some reason, though, good converters appear to be something of a black art - and finding one that works, as opposed to all those that claim to work, was rather frustrating. Therefore, I decided to write one in my favorite language, Perl - only to find that the job has already been done for me, via the 'encoding' pragma. In other words, conversion from, say, KOI8-R to UTF-8 is no more complex than this:
# Convert and write to a file
perl -Mencoding=koi8r,STDOUT,utf8 -pe0 < file.koi8r > file.utf8

# Or just display it in a pager:
perl -Mencoding=koi8r,STDOUT,utf8 -pe0 < file.koi8r | less

It is literally that simple. Pretty much every encoding you can imagine is available (see 'perldoc Encode::Supported' for the naming conventions and charsets). The conversion does not have to be to UTF-8 - it'll do any of the listed charsets - but why would you care?
# Print the Kanji for 'Rakuda' (Camel) from multibyte strings:
perl -Mencoding=euc-jp,STDOUT,utf-8 -wle'print "Follow the \xF1\xD1\xF1\xCC!"'
Follow the 駱駝!

# Or you can do it in Hiragana, but using Unicode values instead:
perl -Mencoding=shift-jis,STDOUT,utf8 -wle'print "Follow the \x{3089}\x{304F}\x{3060}!"'
Follow the らくだ!

* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *
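For those who'd rather not involve Perl at all, the same conversion can usually be done with iconv(1), which ships with glibc on most Linux systems. A minimal sketch, assuming your iconv knows the KOI8-R charset:

# Convert a KOI8-R file to UTF-8 with iconv:
iconv -f KOI8-R -t UTF-8 file.koi8r > file.utf8
# List the charset names iconv understands:
iconv -l | less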
Andrew Elian ([a_elian at sympatico.ca])
Wed, 25 Oct 2006 14:18:37 -0400
Hello.
Here's a quick tidbit to help the PS1 variable do the right thing depending on the terminal - X or otherwise. I've added these lines to my .bash_profile and found them useful:
case $TERM in
    xterm)
        export TERM=xterm-color
        export PROMPT_COMMAND='echo -ne "\033]0;${USER}:${PWD/#$HOME/~}\007"'
        export PS1="$ "
        ;;
    rxvt|Eterm)
        export PROMPT_COMMAND='echo -ne "\033]0;${USER}:${PWD/#$HOME/~}\007"'
        export PS1="$ "
        ;;
    linux)
        export PS1="\[\033[0;32m\]\u \[\033[1;32m\]\W]\[\033[0;32m\] "
        ;;
esac

Sincerely, Andrew
Talkback: Discuss this article with The Answer Gang
Kat likes to tell people she's one of the youngest people to have learned to program using punchcards on a mainframe (back in '83); but the truth is that since then, despite many hours in front of various computer screens, she's a computer user rather than a computer programmer.
When away from the keyboard, her hands have been found full of knitting needles, various pens, henna, red-hot welding tools, upholsterer's shears, and a pneumatic scaler.
By Barrie Dempster and James Eaton-Lee
IPCop is an extremely easy-to-use firewall for the Small Office/Home Office (SOHO) network. It provides most of the basic features that you would expect a modern firewall to have and, most importantly, sets them all up for you in a highly automated and simplified way. Getting an IPCop system up and running is easy and takes hardly any time.
The four types of network interface—Green, Red, Blue, and Orange—supported by IPCop have differing levels of trust associated with them. Here is a simple table outlining what traffic is allowed to go to and from which interfaces. This table, and the knowledge contained within it, should form the basis of our planning when considering how many interfaces to use and what to use them for. This is basically the Traffic Flow diagram from the IPCop administrative guide.
Interface From | Interface To | Status | How To Access
---------------+--------------+--------+-----------------------
Red            | Firewall     | CLOSED | External Access
Red            | Orange       | CLOSED | Port Forwarding
Red            | Blue         | CLOSED | Port Forwarding / VPN
Red            | Green        | CLOSED | Port Forwarding / VPN
Orange         | Firewall     | CLOSED |
Orange         | Red          | OPEN   |
Orange         | Blue         | CLOSED | DMZ Pinholes
Orange         | Green        | CLOSED | DMZ Pinholes
Blue           | Firewall     | CLOSED | Blue Access
Blue           | Red          | CLOSED | Blue Access
Blue           | Orange       | CLOSED | Blue Access
Blue           | Green        | CLOSED | DMZ Pinholes / VPN
Green          | Firewall     | OPEN   |
Green          | Red          | OPEN   |
Green          | Orange       | OPEN   |
Green          | Blue         | OPEN   |
In visualizing the way in which traffic goes through the IPCop firewall, we can see it as a sort of giant junction with a traffic cop (literally, an IP Cop—hence the name!) in the middle of it. When a car (in network parlance, a packet of data) reaches the crossroads, the cop decides in which direction the packet should go (based on the routing tables IPCop uses), and pushes it in the appropriate direction.
In the case of a Green client accessing the Internet, we can see from the previous table that this access is OPEN, so the cop allows the traffic through. In other instances, however, this might not be the case. If a Blue client tries to access a client on the Green segment, for instance, the cop might allow the traffic through if it comes over a VPN or through DMZ pinholes—but if a client on the Blue segment has neither of these things explicitly allowing the traffic, it is stopped. The car is pulled over, the occupants victims of some virtual time in the cells!
Note that (generally) when we illustrate IPCop Configurations, the Red interface is uppermost (North), the Orange interface is to the left (West), the Blue interface is to the right (East), and the Green interface is to the bottom (South).
As with many aspects of the behavior of the IPCop firewall, it is possible to alter the firewalling rules in order to customize IPCop to meet a topology un-catered for by the default rules. Within the context of the firewall rules, IPCop has had a file since the 1.4-series release that allows users to specifically add their own firewall rules (/etc/rc.d/rc.firewall.local). Since version 1.3, there have been iptables chains (CUSTOMINPUT, CUSTOMFORWARD, etc.) allowing iptables rules to be added manually. Using iptables directly is out of our scope here, but we recommend that interested readers read the Linux iptables HOWTO.
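As an illustration only (not taken from the book), a rule appended to /etc/rc.d/rc.firewall.local might look like the sketch below, which accepts SSH connections arriving on the Green interface; the interface name eth0 is an assumption, and will vary with your hardware:

# Hypothetical example for /etc/rc.d/rc.firewall.local - adjust the
# interface name and port to match your own setup:
/sbin/iptables -A CUSTOMINPUT -i eth0 -p tcp --dport 22 -j ACCEPT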
Our first topology exists as a drop-in replacement for the many NAT firewalls on the market. In small offices and homes, embedded NAT firewalls such as those sold by D-Link, Linksys, and friends are frequently deployed in order to provide small networks with cost-effective Internet access. Solutions such as Internet Connection Sharing (ICS) - a combined NAT firewall, DNS proxy, and DHCP server built into client editions of Windows since Windows 98 - are also frequently used to allow one PC with a modem or network interface to act as a network gateway for other clients.

For our purposes here, we will consider ICS, since such a topology with ICS is effectively a superset of the work required to replace a router such as the Linksys or NETGEAR models mentioned previously. Our migration from one of these routers to IPCop would be identical, save for the decommissioning of the ICS software on the client - if we remove the router, this is unnecessary, and the router can be left configured as-is (and/or kept as a backup, or reused elsewhere). (See http://www.annoyances.org/exec/show/ics for more information on implementing, and consequently decommissioning, ICS on different Windows versions.)

Such solutions, while cheap and convenient, are often neither scalable nor reliable, and provide poor security. They open workstations up to unnecessary security risks, provide limited throughput, and are often unreliable, requiring frequent reboots and locking up.
As with software firewalls, a network firewall is designed as a barrier between your workstations and the Internet. By connecting one of your workstations directly to the Internet and using a solution like ICS, although you reduce the resources required to share the Internet connection, you expose that workstation to unnecessary risk. There is also an obligation for that PC to be on all the time - compared to a low-end PC with no unnecessary components and a low-power PSU running IPCop, it may be noisier and have higher power consumption.
IPCop offers a cost-effective replacement in such situations, providing small businesses and home users with a powerful firewall without the need for over-complexity, and adding other features not present in embedded solutions or ICS, such as a customizable DHCP Server, Intrusion Detection, a Proxy Server, and so on.
Such a topology ensures that firewalling is done before data gets to clients, using a package designed to act as a network firewall, greatly increasing the quality of service to clients as well as the security that their network offers. In this situation, the components of IPCop in use would be:
In such a situation, a network administrator or consultant might also choose to enable any of the following pieces of functionality in order to enhance the services provided to the network:
Decommissioning of ICS in such a situation is quite simple—we would merely disable the ICS functionality, as depicted in the following screenshot (taken from the network connections property of the external, internet-facing ICS network interface). Removing ICS is as simple as deselecting the 'Allow other network users to connect through this computer's Internet connection' option. After we have done this, we should hit OK, reboot if asked to, and then we are free to disable and/or remove the external interface on the workstation (disable if we wish to leave a second network card in the machine or if it has two onboard cards, or remove if we are using an external modem or other piece of hardware we intend to remove or install in our IPCop host).
Firewall rules for this topology are simple; as the Green segment is automatically allowed to access resources on the Red interface, there is no topology-specific setup required. Another substantial benefit of deploying IPCop in such a small office situation is that, in the event that the business is required to grow, the solution is scalable. Such a business running a handful of Windows workstations in a workgroup may decide that a workgroup is insufficient for its needs, and that it requires centralized management, file storage, and configuration.
IPCop, even in a pre-upgrade scenario like this, is advantageous simply because it provides a built-in, open upgrade path. There is no hardware or software upgrade required to move from simple NAT and DHCP to a network with several network segments, port forwarding, and a proxy server. If the Server already has several network cards (and with the price of these nowadays, there's no reason for it not to, if an expansion is anticipated), this can even be done with little or no noticeable interruption in service to existing clients.
In a small office situation with a growing company, the need for incoming email might force the activation of the Orange zone, and the deployment and installation of a mail server in this segment. Such a company might choose to keep its desktop and internal server infrastructure within the Green network segment, and put the mail server in the DMZ on a switch/hub, or simply attached to the Orange interface of the IPCop host using a crossover cable. As such systems are exposed to the Internet, this segmentation provides a considerable advantage by providing a 'stop line' past which it would be harder for an intruder to escalate his or her access to the network. Microsoft's Exchange mail server has for some time supported such a configuration through the use of the 'front end' and 'back end' Exchange roles (although these roles will be deprecated in future Exchange releases). With a different network configuration, however, such as Linux clients using a management system such as Novell's eDirectory or Red Hat's Directory Server (RHDS), or a filtering appliance, a similar system with externally-facing SMTP servers (perhaps running the open-source MTA Exim) would be equally beneficial.
In this topology, Clients are freely able to connect to the mail server (whether via POP, IMAP, RPC, or RPC over HTTP). In order for a mail server that exists as part of the network domain to authenticate to the directory server, we would also need to open the appropriate ports (contingent upon the directory provider) to the directory server using the DMZ Pinholes feature.
We also have a Port Forwarding rule set up from the external IP address of the IPCop firewall to port 25 on the mail server. This allows external mail servers to connect to the mail server in order to deliver email. In this topology, a compromise of the mail server (which in the Green segment could compromise the entire network segment) is controlled, as there is some level of protection provided by the firewall.
In such a topology, we use the following capabilities of the IPCop Firewall:
We might also choose to employ any of the following elements of functionality:
In a larger organization, or if the network above grew, we might choose to expand our network topology using one or more IPCop firewalls.
Several IPCop firewalls might be used by such an organization in order to separate several sites, or in order to further segregate one or more DMZs with physically distinct firewalls. It is also worth considering that IPCop is designed primarily for networks in which it is the only network firewall - the Small and Medium Business and Home/Home Office market. Although it is possible to set IPCop up in larger deployments, this is fairly rare, and there are other packages that are arguably more suited to such deployments. In such circumstances, the constraints of IPCop's network segmentation begin to be more burdensome than they are convenient, and the amount of work required to tailor IPCop to meet an organization's needs may exceed the work it would take to manually set up another firewall package to suit the same topology.
In this example, we will consider the broadest scope in which one IPCop box could be deployed, using all four network interfaces to protect a network with an internal (Green) network, an Internet or WAN connection (Red), a DMZ containing more than one Server (Orange), and a wireless segment (Blue) with an IPSec VPN system. In such a situation, we would almost certainly choose to deploy all of the higher-end features that IPCop contains, such as the Proxy Server and the Intrusion Detection System.
In this situation, the services we are providing for the individual network interfaces are as follows: on the Red interface, in addition to the default firewalling policy, we are invoking the Port Forwarding feature to allow connections to port 25 on the mail server in the DMZ, and also to port 443 (HTTPS) on the mail server, in order to allow connections to the business webmail system. We are also allowing incoming IPSec connections to the IPCop firewall, in order to allow remote access to staff who work remotely, and to provide remote connectivity for support purposes for the IT staff and third-party software and hardware vendors.
On the Blue interface, we are providing connectivity via an IPSec VPN for clients, in order that they can access services run from servers internally on the Green segment and DMZ segment. Vendors and visitors are allowed access to the wireless (Blue) segment through use of WPA in pre-shared key mode, configured on the wireless access point.
[ When using pre-shared keys make sure you use the longest possible key combination straight from a good random source. Even WPA cannot guard against the brute-forcing of weak keys. This is also a fine reason for changing the pre-shared key periodically. -- René ]
WPA-PSK with solely an access point prevents access to the wireless segment and the Internet by unauthorized users, and is an adequate solution for most small and medium networks; use of a newer, WPA2-PSK-capable access point increases this security further for those without network infrastructure implementing RADIUS or Certificate Services. The firewalling policy and IPSec system ensure that visitors/vendors only have access to the Red zone (the Internet), and not to any of the resources on the network.
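Following René's advice above, one easy way (of many) to pull a long, random pre-shared key straight from a good random source is shown below; openssl is assumed to be installed, and the 63-character cut matches the maximum WPA passphrase length:

# Generate a random 63-character WPA passphrase:
openssl rand -base64 48 | head -c 63; echo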
On the Orange interface, our pinholes allow the DMZ servers to connect to a directory server and Kerberos domain controller in the Green segment in order to authenticate users logging onto them via the company directory system. This ensures that the policy and configuration for these Servers is managed centrally, and that there are logs stored centrally for them, but the damage that could be caused by a compromise of these externally-facing services is greatly minimized, ensuring business security and regulatory compliance.
On the Green interface, we allow connectivity to all interfaces; workstations and servers within the Green segment are centrally managed machines on which users do not have the level of access necessary to cause damage to the resources they can reach.
[ Trojans have become very popular. This is a good reason to think about restricting the access of networked machines inside the Green network to proxies equipped with intrusion detection/prevention software. -- René ]
In such a situation, we are making use of the following IPCop features:
In a larger organization, we may also choose to use IPSec in site-to-site mode in order to link this office with one or more branch or parent offices. In this role, as in the role of a single network firewall, IPCop excels.
This article has been adapted from the book, "Configuring IPCop Firewalls: Closing Borders with Open Source" by Packt Publishing.
For further details please visit http://www.packtpub.com/ipcop/book/.
Talkback: Discuss this article with The Answer Gang
Barrie Dempster is currently employed as a Senior Security Consultant for NGS Software Ltd, a world-renowned security consultancy well known for their focus in enterprise-level application vulnerability research and database security. He has a background in Infrastructure and Information Security in a number of specialised environments, such as financial services institutions, telecommunications companies, call centres, and other organisations across multiple continents. Barrie has experience in the integration of network infrastructure and telecommunications systems requiring high-calibre secure design, testing and management. He has been involved in a variety of projects from the design and implementation of Internet banking systems to large-scale conferencing and telephony infrastructure, as well as penetration testing and other security assessments of business-critical infrastructure.
James Eaton-Lee works as a Consultant specializing in Infrastructure Security who has worked with clients ranging from small businesses with a handful of employees to multinational banks. He has a varied background, including experience working with IT in ISPs, manufacturing firms, and call centers. James has been involved in the integration of a range of systems, from analogue and VOIP telephony to NT and AD domains in mission-critical environments with thousands of hosts, as well as UNIX and Linux servers in a variety of roles. James is a strong advocate of the use of appropriate technology, and the need to make technology more approachable and flexible for businesses of all sizes, but especially in the SME marketplace in which technology is often forgotten and avoided. James has been a strong believer in the relevancy and merit of Open Source and Free Software for a number of years and - wherever appropriate - uses it for himself and his clients, integrating it fluidly with other technologies.
By René Pfeiffer and pooz
Once upon not so very long ago, a lone Web server was in distress. Countless Web browsers had laid siege to its port. The bandwidth was exhausted; the CPUs were busy; the database was moaning. The head of the IT department approached Pooz and me, asking for an improvement. Upgrading either the hardware or the Internet connection was not an option, so we tried to find out what else we could do - caches to the rescue!
Every computer lives on caching. Your CPU has one, your disk drive, your operating system, your video card, and of course your Web browser. Caches are designed to keep a copy of data that is accessed often. The CPU caches can store instructions and data: instead of accessing system memory to get the next instruction or piece of data, the CPU retrieves it from the cache. The Web browser, however, is more interested in caching files such as images, cascading style sheets, documents, and the like. This speeds up Web surfing, because certain format elements appear quite frequently in Web pages. Rather than repeatedly downloading the same image or file, the browser re-uses items found in the cache. This is especially true if you look at generated pages from a content management system (CMS).

Now, if we can find a way of telling the Web browser that its copy in the cache is valid, then we might save some of our bandwidth at the Web server. In the case of our CMS, which is Typo3, we can also save both CPU time and database access, provided we can publish the expiration time of generated HTML documents as well.

You can also insert an additional cache between Web browsers and your server, to reduce server requests still further. This cache is called a reverse proxy - sometimes called a gateway or surrogate cache. Classical proxies work for their clients, but a reverse proxy works for the server. This proxy also has a disk and memory cache, which can be used to offload static content from the Apache server. The following picture illustrates where the caches are and what they do.
The green lines mark cache hits. A cache hit is valid content (i.e., not expired) that is found in a cache and can be copied from there. Hits often don't reach the Web server. Some clients may ask the Web server if the content has already changed, but this short question doesn't generate much traffic. The Web server simply answers with a "HTTP/1.x 304 Not Modified" header and no additional data. The red lines mark cache misses. A miss occurs when the cache doesn't find the requested object and requests it from the target server. It is then copied to disk or memory and served to the client. Whenever another request is forwarded to the cache, a local copy is used as long as it is valid.
How does a cache know when to use a local copy and when to ask the server? Well, it depends. A browser cache looks for messages from the Web server. The server can use cache control headers to give advice. Let's look at an example. The request "GET http://www.luchs.at/linuxgazette/index.html HTTP/1.1" fetches a Web page whose HTTP headers look like this.
HTTP/1.x 200 OK
Date: Tue, 03 Oct 2006 10:24:35 GMT
Server: Apache
Last-Modified: Mon, 02 Oct 2006 02:04:36 GMT
Etag: "e324ac5-6d7d5500"
Accept-Ranges: bytes
Cache-Control: max-age=142800
Expires: Thu, 05 Oct 2006 02:04:36 GMT
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Length: 3028
Content-Type: text/html; charset=ISO-8859-1
X-Cache: MISS from bazaar.office.lan
X-Cache-Lookup: MISS from bazaar.office.lan:3128
Via: 1.0 bazaar.office.lan:3128 (squid/2.6.STABLE1)
Proxy-Connection: keep-alive
The server gives you the HTML document. In addition, the HTTP header contains the following fields:
Cache-Control: is better than Expires:, because the latter requires the machines to use a synchronised time source. Cache-Control: is more general, but only valid for HTTP 1.1. Note that the example includes some data that wasn't sent by the Apache server: the last four HTTP header fields were inserted by the local Squid proxy in our office. They tell us that we made a cache miss.
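You can inspect such headers for any page yourself from the command line; a quick sketch using curl (wget -S would work similarly):

# Fetch only the headers and pick out the caching-related fields:
curl -sI http://www.luchs.at/linuxgazette/index.html | egrep -i 'cache-control|expires|last-modified'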
Now let's turn to our servers, and see what we can configure there.
Even though the Cache-Control: header is better, we first look at a way to generate an Expires: header for served content. The Apache Web server has a module for this, called mod_expires. Most distributions include it in their Apache version. You can also compile it as a module, and enable it after installing your own Apache. Either way, you can now create Expires: headers, either in the global configuration or per virtual host. A sample setup could look like this (for Apache 2.0.x):
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType text/html "modification plus 3 days"
    ExpiresByType text/xml "modification plus 3 days"
    ExpiresByType image/gif "access plus 4 weeks"
    ExpiresByType image/jpg "access plus 4 weeks"
    ExpiresByType image/png "access plus 4 weeks"
    ExpiresByType video/quicktime "access plus 2 months"
    ExpiresByType audio/mpeg "access plus 2 months"
    ExpiresByType application/pdf "modification plus 2 months"
    ExpiresByType application/ps "modification plus 2 months"
    ExpiresByType application/xml "modification plus 2 weeks"
</IfModule>
The first line activates the module. If you forget it, mod_expires won't do anything. The remaining lines set the expiration period per MIME type. mod_expires automatically calculates and inserts a Cache-Control: header as appropriate, which is nice. You can use either "modification plus ..." or "access plus ...". "modification" works only with files that Apache reads from disk; this means that you have to use "access" if you want to set Expires: headers for dynamically generated content, as well. Be careful, though: CGI scripts are supposed to set their own expiration date in the past, to guarantee immediate reloads - but some developers don't care. mod_expires will break badly written CGIs - harshly. Once, I spent an hour digging through horrible code to find out why a login script didn't work anymore. The developer had forgotten to set the expiration time correctly, so I adapted the server config for this particular virtual host as a workaround. Also, be sure to select suitable expiration periods. The above values are examples; you might have different requirements, depending on how frequently your content changes.
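How you switch the module on depends on your Apache installation; on a Debian-style Apache 2 layout (an assumption - with a source build, you would add the corresponding LoadModule line to httpd.conf instead), enabling it might look like this:

# Enable mod_expires and reload Apache without dropping connections:
a2enmod expires
apache2ctl graceful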
The Squid proxy has a metric ton of configuration directives. If you have no experience with Squid, this can seem a bit overwhelming at first. Therefore, I present only a minimal config file that does what we intend to do. The capabilities of Squid are worth a second look, though. I will assume we are running a Squid 2.6.x proxy, built from source and installed in /usr/local/squid/.
The reverse proxy takes the place of the original Web server. It has to intercept every request, in order to compare it with its cache content. Let's assume we have two machines: the original Apache Web server at 172.16.23.42, and the Squid reverse proxy at 172.16.23.43.
The local /usr/local/squid/etc/squid.conf defines what our Squid should do. We begin with the IP addresses, and tell it to listen for incoming requests on port 80.
http_port 172.16.23.43:80 vhost vport
http_port 127.0.0.1:80
icp_port 0
cache_peer 172.16.23.42 parent 80 0 originserver default
ICP denotes the Internet Cache Protocol. We don't need it, and turn it off by using port 0. cache_peer tells our reverse proxy to forward every request it cannot handle to the Web server. Next, we have to define the access rules. In contrast to the situation with client proxies, a reverse proxy for a public Web server has to answer requests for everybody. Warning: This is a good reason not to mix forward and reverse proxies, or you will end up with an open proxy, which is a bad thing.
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl accel_hosts dst 172.16.23.42 172.16.23.43
http_access allow accel_hosts
http_access allow manager localhost
http_access deny manager
http_access deny all
deny_info http://www.example.net/ all
The acl lines define groups. accel_hosts are our two servers. http_access allow accel_hosts allows everyone to access these servers. The other lines are from the Squid default configuration, and deactivate the cache manager URL. We don't need this right now. The last line is a safeguard against unwanted error pages. (Squid has a set of its own: they differ from the Apache error pages.) Users are sent to our front page, in case there are any troubles with requests. You can view the full squid.conf separately, because the rest "only" deals with the cache setup and tuning. (Take care: the config is taken from a server with 2 GB RAM and lots of disks. You might want to reduce the cache memory size.) As I said, Squid is capable of doing many wonderful things. As soon as Squid is up and running, we are ready to send our users to the reverse proxy.
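Before sending anyone there, it is worth letting Squid check its own configuration; with the source install described above, the binary lives in /usr/local/squid/sbin/:

# Parse the config file and report any errors, then tell a running Squid to reload it:
/usr/local/squid/sbin/squid -k parse
/usr/local/squid/sbin/squid -k reconfigure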
You have to be careful if you rely on accurate statistics from your Web server logs. A good deal of the HTTP requests will be intercepted by the Squid reverse proxy. This means that the Apache server sees fewer requests, and that they originate from the IP address of the proxy server. That was the very idea of our setup. You can collect Apache-like logs on Squid, if you change the log format.
logformat combined %{Host}>h %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
logformat vcombined %{Host}>h %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h"
access_log /var/log/squid/access.log combined
access_log /var/log/squid/vaccess.log vcombined
In order to incorporate them into your log analysis, you have to copy the logs from the reverse proxy and merge them with your Apache logs. As soon as your Web setup uses a proxy or even load balancing techniques, maintaining accurate statistics gets quite tricky.
After you have configured Apache and Squid, you are ready to test everything. Start with a single virtual host reserved for testing purposes. Change the DNS records to point to the reverse proxy machine. Check the logs. Surf around. Analyse the headers. When you are confident, move the other DNS records. A side note for debugging: you can force a "real" reload in Internet Explorer and Mozilla Firefox if you hold down the Shift key while pressing the "Reload" button. An ordinary reload may now just hit the local cache.
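A simple way to check that the proxy is really caching is to fetch the same page twice and watch the X-Cache header that Squid adds; for cacheable content, the first request should show a MISS and the second a HIT:

# Request a page through the reverse proxy, discarding the body; repeat to see a HIT:
curl -s -D - -o /dev/null http://www.example.net/ | grep -i x-cache
curl -s -D - -o /dev/null http://www.example.net/ | grep -i x-cache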
You won't get a good impression of what's changed just by looking at the logs. I recommend a monitoring system with statistics - Munin, for example - so that you can graphically see what your servers are doing. I have two graphs from testing servers, taken during a load simulation.
In the first graph, red shows cache misses; green shows cache hits. Below, you can see the hits on the Apache server behind the reverse proxy. The shape of the graphs is similar, but keep in mind that all requests shown in green on the Squid server never reach the Apache, and thus reduce the load. If you compare the results, you will see that only one in two of the requests gets through to the Apache server.
Now, you know what you can achieve using the resources of Apache and Squid. Our Web server handled the traffic spikes well, the CPU load went down by 50%, and all the surfers were happy again. You can do a lot more, if you use multiple Internet connections and load balancing on the firewall or your router. Fortunately, we didn't need to do that in our case.
No animals or software were harmed while preparing this article. You might wish to take a look at the following tools and articles; they may just save your Web server.
Talkback: Discuss this article with The Answer Gang
René was born in the year of Atari's founding and the release of the game Pong. From his early youth, he has been taking things apart to see how they work. He couldn't even pass construction sites without looking for electrical wires that might seem interesting. His interest in computing began when his grandfather bought him a 4-bit microcontroller with 256 bytes of RAM and a 4096-byte operating system, forcing him to learn assembler before any other language.
After finishing school, he went to university in order to study physics. He then collected experience with a C64, a C128, two Amigas, DEC's Ultrix, and OpenVMS, before finally arriving at GNU/Linux on a PC in 1997. He has been using Linux ever since, and still likes to take things apart and put them together again. The freedom of tinkering brought him close to the Free Software movement, where he puts some effort into the right to understand how things work. He is also involved with civil liberty groups focusing on digital rights.
Since 1999, he has been offering his skills as a freelancer. His main activities include system/network administration, scripting, and consulting. In 2001, he started giving lectures on computer security at the Technikum Wien. Apart from staring into computer monitors, inspecting hardware, and talking to network equipment, he is fond of scuba diving, writing, and photographing with his digital camera. He would like to have a go at storytelling and roleplaying again as soon as he finds some more spare time on his backup devices.
pooz is a system administrator and Web application hacker working in Vienna, Austria. Free/Open Source software has been his tool of choice since the early '90s.
But you know nowadays
It's the old man
He's got all the money
And a young man ain't got nothin' in the world these days
 - The Who, `Young Man Blues'
Laptops are fine, if you can afford them. Even if you cannot afford one, often enough the situation arises that computer work has to be done at home - whether you are studying, doing business charts, programming, Web designing....
This article presents a reliable and truly cheap alternative to buying a laptop for keeping up with work/study demands.
My situation forced me to prepare work at home, but a laptop was far too expensive. Perhaps due to the popularity of laptops, it is actually much cheaper to buy a second-hand PC. A portable hard drive would have been a nice thing, but it requires synchronising directory content between three locations - the portable drive, the computers at home, and those at work.
The solution presented here is possibly the cheapest one available, and is based on an inexpensive USB stick; the required `intelligence' to reliably synchronise directories is provided by a set of scripts. Unlike hardware-based solutions, this works across different distributions, and even across operating systems. The principle is simple, but there are a number of subtle problems that have been ironed out during three years of daily use (with my second-hand PC).
The USB directory synchroniser consists of the following:
All this is too much for one article; hence, the basic functionality (the first two items) is covered in this article, and the remaining items follow in a subsequent part.
This part starts with a reduced, but fully functional, `bare-bones' script, to explain the core functionality, how to set it up, and how to hack it. When the full script follows in Part II, its functionality will then be obvious - it is not more complex, but just has more bells and whistles.
If you are a Perl hacker, you should feel at liberty to skim through the following sections and move on to the more complex version.
The daily archive has a fixed name; mine is called `actual.2' and is located in $flashdir:
$flashdir = ${ENV}{'VOL'} || "/vol/flash";
$tarfile  = "$flashdir/actual.2";
The first line sets the mount point, which can be overridden using the VOL environment variable. (For example, you could say export VOL=/vol/floppy and store the archive on a floppy disk.)
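For instance, assuming the script is installed as pack (as described below), a one-shot override without exporting the variable might look like:

# Look for the daily archive on a floppy instead of the flash stick, just this once:
VOL=/vol/floppy pack --list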
Mounting happens as with any other filesystem (e.g., hard drive partitions), but with one important difference: the system may be able to tell when a USB stick is inserted, but it cannot reliably tell when the stick is pulled out, or sync any unflushed filesystem buffers at that point. As convenient as the `storage media' icons on GNOME/KDE desktops are, I have found them unsatisfactory for the purpose of archiving: more than once I have ended up with corrupt, half-full archives this way. Therefore, a different alternative is presented here; and there is additional provision to make sure that the archive does indeed end up on the USB stick.
The safest bet, though tedious, is to always mount(8) / umount(8) the stick:
mount -t vfat /dev/sda1 /vol/flash
USB storage uses the SCSI subsystem; therefore, in most cases, the first USB stick plugged into your computer will appear as /dev/sda1; check this via dmesg and/or /var/log/messages.
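A quick way to confirm the device name right after plugging the stick in (assuming a typical 2.6 kernel logging to the kernel ring buffer):

# The last few kernel messages will usually show the new SCSI disk and its partitions:
dmesg | tail -n 20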
A better alternative to manual mounting is the automounter, which auto-mounts a directory the first time one tries to access (read) it, and umounts it automatically after a fixed timeout.
Most, if not all, systems come with the automounter by default (man autofs(8)); it is started at boot time via /etc/init.d/autofs. The automounter is configured via so-called `map' files, which describe the mapping from hardware devices (such as /dev/sda1) to mountpoints.
The first file to consider is /etc/auto.master, which contains but one line:

/vol /etc/auto.vol --timeout=20

This instructs autofs to consult the file auto.vol in all matters of volatile media. The file /etc/auto.vol then contains the actual map; the relevant entry is the following:
flash -fstype=auto,dirsync,user,umask=000,exec,noatime,shortname=winnt :/dev/sda1
The above line parses into three distinct sections: the mountpoint under /vol, the mount options, and the device to be mounted. (To create all the mountpoints, use mkdir -vp /vol/{flash,floppy,cdrom} under bash; the configuration of floppy/cdrom is analogous, and plausible entries are sketched below.) The fstype is detected automatically, but I can only recommend sticking with vfat: using ext2/3 will trigger unpleasant fsck-ing at boot time. The dirsync and noatime options are important for solid-state devices such as USB sticks, as they reduce the number of device writes (flash memory supports only a limited number of write cycles). For the remaining options, see mount(8).
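For illustration, companion map entries for the other media might look like this (assumed values, not the author's exact file):

floppy -fstype=auto,user,umask=000,noatime :/dev/fd0
cdrom  -fstype=iso9660,ro,user             :/dev/cdrom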
After creating and editing these two files (with the correct settings for your stick), you should be able to do a `/etc/init.d/autofs restart' and see the contents of your flash directory via `ls -l /vol/flash'. If so, you are ready to experiment with the script and its configuration file (if it is copied into /etc/pack.list, make sure that you have write access). The automounter is best enabled at boot time (Fedora/RH: verify with chkconfig --list autofs, enable with chkconfig autofs on; Debian: update-rc.d autofs defaults).
The script works very well with automounting: before doing anything else, it first tries to access $flashdir; if it cannot, it gives up after several repeated attempts with a warning message.
The remainder of the article describes how the script works. (You can also get debug output by setting the script's debug variable to a value greater than 4.)
The main, bare-bones script, which we will now take apart, uses Perl built-ins for things that have to run fast (such as traversing directories and creating lists), and calls other programs (such as tar) for everything else.
The following requirements made a script inevitable: only files changed during the last few days should be packed; symlinks must be expanded; unwanted subdirectories must be left out; and the result has to arrive on the stick intact.
An important program called by this script is (g)dialog, which provides GUI dialogs (example error box in dialog and in gdialog):
$dialog = ($ENV{'DISPLAY'} && (qx#which gdialog 2>/dev/null#))? "gdialog" : "dialog";
On the console, dialog is called; under X, gdialog is used for the same purposes. The qx/.../ check falls back to dialog in both cases if gdialog is not available. On a Debian system, you can install both via apt-get install dialog zenity (gdialog is in the zenity package); similarly for other distros.
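To see what the error boxes look like, you can produce one directly from the command line (box text and dimensions are illustrative):

dialog --title "Error" --msgbox "Tar Problem! Deleting archive actual.2" 8 50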
For the rest of the user interaction, the pager $less is used, and we have the logging function:
sub log { printf STDERR "\033[1;34;7m@_\033[m\n"; }
The funny digits are ANSI escape sequences to colour the output; good examples of using these can be found elsewhere in LG. Since Perl already has a log(-arithm) function, we need to make clear which log we want; hence, the above function is invoked as ::log() (an abbreviation for main::log()).
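You can preview the effect of the same sequences in any terminal, e.g. in bash:

echo -e "\033[1;34;7mCreating backup archive file ...\033[m"   # bold, blue, reverse video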
The bare-bones script can already do the following: list the contents of an archive, un-pack an archive, pack a new one, and build the list of files to archive.
In increasing order of complexity, we will consider (1) listing, (2) un-packing, (3) packing, and (4) building the file list.
Listing is easy: $see is true when the --list option is set:

if ( $see ) {
    build_list();
    system "$less $list";
}

The build_list() function hides all the complexity of parsing, collating, and checking; it is discussed later. The third line calls our pager `$less' (the PAGER environment variable) on the newly created $list.
Un-packing is done by invoking the script with the --unpack option. On my system, I have found it useful to use the following alias (the pack script is in ~/bin, and ~/.bashrc contains export PATH=~/bin:$PATH):

alias unpack="pack --unpack"

Un-packing is described by the following pseudo-code:
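Rendered as a shell sketch (reconstructed from the description below and from the packing code shown later; the real implementation is in Perl):

# (1) read the archive's volume label and compare it with this machine's name
label=$(tar -jtvf "$tarfile" | head -1)
if echo "$label" | grep -q "$(hostname -f)"; then
    echo "Archive was created on this machine - not unpacking over newer files."
else
    # (2) the label differs, so the archive came from the other machine: unpack it
    tar -jxpPvf "$tarfile"
fi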
Step (1) is important: if you accidentally leave an archive on the stick and then unknowingly unpack it some days later, it will silently overwrite your files with the older versions stored in the archive. This has happened to me several times, but the remedy is both simple and very effective.
Tar has a rich feature set, stemming from the old days when it was used as a Tape ARchiver: it supports storing archives on several different (tape) volumes, and with the --label=myLabel option these volume archives can be given individual names. You can view the volume name of an archive by using -t to list it; the volume label appears in the first line. In the present case, the volume names are simply set to the fully-qualified hostname(1) of the system. (This assumes that different PCs have different hostnames.)
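For example, listing the daily archive shows the volume header in the first line (hostname and date are illustrative):

tar -jtvf /vol/flash/actual.2 | head -1
# V--------- 0/0  0 2006-10-12 12:00 homebox.example.net--Volume Header--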
The inverse operation to un-packing is simple; all the complexity is in build_list():
::log "Creating backup archive file ..."; system "tar -jcpPvf $tarfile --label $hostname --files-from=$list 2>$log"; if ($? != 0) { unlink $tarfile; error "Tar Problem!\nDeleting archive $tarfile"; } ::log "Syncing ..."; system "sync; sync;"; ::log "Testing file integrity:"; system "bzip2 -tvv $tarfile";
The $tarfile is created with the files from $list, and the volume label is set to $hostname. In case of error, the archive is deleted (unlink-ed) and an error window pops up. Otherwise, sync(1) is called twice to flush the filesystem buffers. The subsequent file integrity test provides additional protection against removing the USB stick before all of the data has been safely transferred. (This is a common problem with removable media, and it is good to be cautious here.)
The build_list() routine adds intelligent functionality around the tar program: it processes the contents of a configuration file in such a manner that only files changed in the last few days are passed on to tar, without adding unwanted subdirectories, but with full expansion of symlinks.
The complexity that this requires is hidden behind recursion (a function calling itself again), which is a good working principle, since directories are themselves recursive: a sub-sub-directory is a directory in a directory is a directory in a directory ... :-)
Let's look at the main loop, which parses the file $config.
while (<>) {
    strip_comments_and_leading_whitespace();
    next if $line_is_empty;
    my @arr = split;                          # put all single words into @arr

    if ($arr[0] =~ m/<\s*rec\s*>/i) {         # line starts with <REC>
        shift @arr;
        getLinkedFiles(@arr);
    } elsif ($arr[0] =~ m/<\s*link\s*>/i) {   # line starts with <LINK>
        shift @arr;
        readLink(@arr);
    } else {                                  # this is a `normal' line
        foreach (@arr) {
            if (m#[{*]#) {                    # e.g. /home/gerrit/{.bash*,.exrc,bin/*}
                let_bash_expand_entry();
            } elsif ( -d ) {                  # a single directory: traverse
                getLinkedFiles($_);
            } else {                          # a file or a link: just print to list
                printf "$_\n";
            }
        }
    }
}
The configuration file contains file/directory names on each line; bash shell-globbing syntax is allowed. (In this case, bash is actually used to expand entries.) Configuration lines starting with <LINK> mean "I want this symlink to be followed, but I don't care about deep directory traversal". You get the full works of directory traversal plus symlink expansion with deep recursion by using the <REC> tag. Here is an example configuration file which illustrates the principle.
The while() loop iterates over each line of the $config file. The split statement separates single words into the array @arr. Hence, when a line starts with a <LINK> or <REC> tag, we need to remove that tag from @arr before passing it as argument to one of the functions; this is handled by the shift statement. All output from the invoked subroutines is redirected to the TMPLIST temporary file, containing the expanded list of files, resolved symlinks, and traversed directories.
We now briefly look at getLinkedFiles(), which maintains a hash-list %KNOWN to avoid cycles and proceeds differently depending on the type of entry: plain files are simply printed to the list, directories are traversed recursively, and symlinks are handed to readlink_recurse() for resolution.
The readlink_recurse() routine in turn calls getLinkedFiles() to resolve new file entries; it contains logic to avoid getting lost in symlink loops. These can be a bad trap otherwise (try e.g. this: ln -s a b; ln -s b a; namei a).
To pick those files that have changed during the last $days days, build_list() uses a simple trick: it rewinds the start time $^T (Perl special variable) of the script by this amount in seconds. This means that, once file modification times are tested, Perl already thinks it is executing $days back in history:
$^T -= $days * 24 * 3600;

The following is the final processing of the file list, refining TMPLIST into LIST:
while (<TMPLIST>) {
    chomp;
    s#/[^/]+/(\.\.)+/(\S+)#/$2#g;
    error "FATAL:\n \"$_\"\n--$!\n" unless -e $_;
    next if ( (-M) >= 0 );
    print LIST "$_\n" unless $KNOWN{$_}++;
}
After chomp()ing the `\n', pathnames containing `..' are normalised: for example, /usr/local/lib/wx/../../bin/wx2 is reduced to /usr/local/bin/wx2. If the file does not exist (e.g., due to a broken symlink), an error message is produced.
The file age test `-M' returns the time (in days) since the file ($_) was last modified, relative to the script's start time; a negative age means that the file was modified `in the future', i.e., after the rewound start time $^T. Thus, all files created or modified during the last $days appear with a negative modification time and are added to the list. Lastly, the filename is printed into LIST unless it has been encountered before (indicated by a non-zero entry in the hash %KNOWN).
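The date-based selection is the same idea as find(1)'s -mtime test; for example, to list the files under a hypothetical ~/work tree modified within the last 3 days:

find ~/work -type f -mtime -3 -print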
That's it: the list is built, the file is closed, and everything is passed on to tar to create the archive.
This part of the article has described how to use a USB stick for daily directory synchronisation between two non-networked computers. The principle has been presented on a scaled-down (but fully functional) script. The next part will introduce additional functionality that simplifies the usage and makes it robust for many day-to-day situations.
In summary, using USB sticks for synchronising directories between home and workplace is an efficient, workable, and very cost-effective solution. Some companies now even give away USB sticks for free, thereby contributing to a significant reduction in Total Cost of Ownership (TCO) of the solution presented in this article.
Talkback: Discuss this article with The Answer Gang
The situation can typically arise at an educational institution or in a training environment, where a general-purpose (Linux-based or not) computer laboratory is set up to give student users access to the lab machines, but not super user (root) access. The machines will typically be set up via the BIOS to boot from the hard drive first, so that the systems cannot be subverted by inserting a bootable CD-ROM or floppy; of course, a BIOS password is required in order to change BIOS settings. However, there will often exist a group, e.g. an advanced operating systems class, where Linux is required and super user access is essential. It is undesirable to give this group super user access to the regular system, or access to the BIOS password.
We would like to at least achieve a set-up where the normal users of the computer laboratory can conduct business as before, while the advanced users can boot into a separate configuration where they have super user privileges. We do not want the advanced users to require root access to the configuration for normal users, because the advanced users will be routinely modifying and occasionally breaking the system. In summary, the goal is that normal users see no change, while advanced users get a root-capable environment that is kept physically separate from the normal installation.
The proposed solution is to be implemented by the system administrator, who has super user access, knowledge of the needed passwords, etc. Once the solution is implemented, the system administrator can once again go hide in his/her cave. We start by modifying the BIOS to boot the system with a bootable CD. For our case study, we chose Xubuntu 6.06.1 as our bootable CD; the 6.10 version was not yet available at this writing. Xubuntu, using neither KDE nor Gnome, presumably has a somewhat smaller footprint than the equivalent Ubuntu or Kubuntu versions. This could be important, since our ultimate installation target is a USB micro drive (USB-MD). It may also run a bit faster on that hardware target than its more top-heavy cousins, which is also an issue with today's 3600 RPM USB-MDs.
Our solution assumes that the machine to be used allows booting from the MBR of the USB-MD when the BIOS is appropriately configured. In this case, we place a master boot record (MBR) on the USB-MD. The advanced user can then insert this USB-MD into a laboratory computer and reboot the system into Xubuntu. A normal user has no such USB-MD and when a reboot is attempted the system will boot first from the hard drive MBR, hence into the normal operating system configuration.
This approach has both advantages and disadvantages. The first disadvantage, namely that an advanced user could boot the USB-MD and then tamper with the normal installation on the hard drive, is ameliorated if the laboratory is a teaching laboratory holding no sensitive data, essentially just information (e.g., configuration, applications) that would be inconvenient to reconstruct from the backup media. Further, if malice is not present, there is little likelihood that the machines would be compromised. It is assumed, for example, that the Xubuntu installation on the USB-MD would be configured so that it does not automatically mount the hard drive partitions at bootup; the advanced user would thus have to overtly violate the laboratory protocol to compromise the normal system. The second disadvantage, namely that different target machines may need different configurations, has various cures. One is an overt modification such as using a different /etc/X11/xorg.conf for different target machines. Perhaps more attractive would be to employ a USB-MD with enough capacity for two Xubuntu installations: one for the laboratory environment, and the other for home use. Whichever one is booted should also mount the other, for easy data transfer and backup. Alternatively, the two separate installations could share a common /home directory in its own partition. At this writing, for this author's purposes, a 10 GB USB-MD would appear to fit that purpose nicely. Maybe one will appear at birthday time.
The bootable CD with a companion USB stick for storage is also a potential solution, but it doesn't work as well for the problem space envisioned. For example, if kernel recompilation and redeployment is an area of investigation (e.g., investigating the implementation of new system calls), or if changing major portions of the root filesystem is fair game, one either burns a new CD to try the results or moves more and more of the system onto the USB stick. Because the USB stick has a more limited capacity, the USB-MD solution evolves rather naturally.
Another possibility is just to make the laboratory machine dual/multi boot, so it can boot into the 'normal user' Linux or the 'advanced-user' Linux. There are several serious drawbacks. First, the advanced user cannot change the USB-MD boot configuration for grub (or lilo) because that resides on the hard drive and requires super user privileges. Of course, the advanced user merely needs to boot the USB-MD and mount the hard drive to make such changes. However, our protocol is to avoid having the advanced user mount the hard drives, to prevent inadvertent damage. Second, the normal user can boot into the 'advanced user' Linux. This would likely be unsuccessful, but is still undesirable.
Perhaps yet another solution involves virtualization. This may ultimately be viable, but the author's current understanding of one such approach, Xen Virtualization, indicates that within the sandbox of a sub domain the user may be given super user privileges, but such tasks as kernel recompilation and redeployment will require super user access at the hypervisor level.
The case study was conducted in the Computer Science department at Malaspina University-College in Nanaimo, BC, Canada. The Linux box used was one from a recently configured laboratory. The USB-MD was the property of the author.
The salient hardware details are these:
Linux box vendor -> recent Seanix purchase
BIOS vendor      -> Intel
BIOS version     -> NT94510J.86A.3813.2006.0411.1332
USB-MD vendor    -> T-one
USB-MD capacity  -> 5 GB
The basic steps here are to configure the BIOS to boot from the CD drive, boot the Xubuntu install CD, and direct the installation to the USB-MD.
[Note: Xubuntu should also provide an option to install to the hard drive, which we do not want here.]
When the USB-MD installation has completed, choose to reboot the system, removing the CD at the appropriate time (typically, it will be ejected). At the very start of reboot, enter the BIOS to ensure that the system will reboot from the USB-MD first. The reboot may take several tries in order to get the BIOS settings just right. For the Intel BIOS referenced above, the choices in the BOOT sub menu were these. Not all are salient, but the following choices worked:
Boot Menu Type                  -> Advance
Boot Drive Order                -> USB-MD, CD, Floppy, Hard Drive
Boot to Optical Devices         -> Enable
Boot to Removable Devices       -> Enable
Boot to Network Devices         -> Disable
USB Boot                        -> Enable
Zip Emulation Type              -> Hard Disk
Boot USB Devices First          -> Enable
USB Mass Storage Emulation Type -> All Fixed Disk
Different BIOS vendors will provide different choices and wording, but the above gives the idea.
The file /boot/grub/menu.lst installed by Xubuntu on the USB-MD for booting the Seanix machine was as follows (except for editing of the title line):
title Xubuntu 6.06.1 on USB_MD
root (hd0,0)
kernel /boot/vmlinuz-2.6.15-26-386 root=/dev/sdb1 ro quiet splash
initrd /boot/initrd.img-2.6.15-26-386
savedefault
boot
The Seanix hard drive (SCSI) was seen as /dev/sda, while the USB-MD appeared as /dev/sdb. The USB-MD was recognized first in the BIOS bootup (hence root (hd0,0) above), whether or not the hard drive MBR was chosen to boot Linux; its own MBR could then be used to boot into the USB-MD.
The author's home machine already had a version of Kubuntu (one of Xubuntu's heavier cousins) installed. It differs from the laboratory Seanix machines in some important aspects: the BIOS sees the home hard drive, rather than the USB-MD, as the first drive, and the video hardware requires a different X configuration. These differences were dealt with as follows:
The new stanza for the USB-MD interjected into the /boot/grub/menu.lst on the hard drive was:
title Xubuntu 6.06.1 on USB_MD
root (hd1,0)
kernel /boot/vmlinuz-2.6.15-26-386 root=/dev/sda1 ro quiet splash
initrd /boot/initrd.img-2.6.15-26-386
savedefault
boot
With this new stanza and the proper replacement for /etc/X11/xorg.conf, the USB-MD could be booted from the home machine. Note that this is the dual/boot solution rejected earlier for the laboratory machines. However, we assume that the home machine does not need to protect different user groups from each other.
The bootable USB-MD is an acceptable solution where general-purpose labs must provide convenient access both to normal users and to advanced users needing super user privileges, while keeping a hardware separation between the two groups. Once the system is implemented, it requires virtually no extra care or feeding by the system administrator.
The solution does compromise security somewhat, which is acceptable for the typical general purpose teaching/training laboratory where sensitive data are not stored and where the advanced user is trustworthy.
Talkback: Discuss this article with The Answer Gang
Dr. Richard Sevenich, Professor Emeritus of Computer Science, retired from Eastern Washington University early in 2006. However, intending to remain active, he is currently teaching Computer Science on a part time basis at Malaspina University-College, in Nanaimo, BC, Canada. His teaching and research interests lie in the areas of Operating Systems, Compiler Design, and industrial control via state languages. He writes occasional articles on these and other topics. He has used Linux since the summer of 1994.
By Vishnu Ram V
Occasionally, system administrators run into situations where the conventional ways of troubleshooting an issue (test scripts, log files, tweaking configuration settings, and the like) yield no results, and one has to dig deeper into the internals of the server. Strace proves to be a valuable tool in such situations: it intercepts and records the system calls made by a process and the signals received by a process. This article shows how to use strace to troubleshoot Apache, by working through a real-world issue. I will begin with the problem statement and then move on to the initial troubleshooting attempts; the inner workings of the Apache server are briefly explained just before examining the strace results. The various options of strace and its internal workings are beyond the scope of this article; please refer to the man pages for those.
ISSUE: Can't send mail from web pages using the PHP 'mail()' function.
OS: RedHat
SMTP Server: Sendmail
Web Server: Apache/1.3.22 - a virtual hosting environment; many sites are hosted on the server.
PHP version: 4.0.6
Let me first recreate the issue and see the error for myself. I will create a test script, 'mail.php', that uses the 'mail()' function to send mail. The test script is as follows:
<?php
error_reporting(E_ALL);
$to = "Vishnu <we2cares@fastmail.fm>";
$subject = "This is a test email";
$message = "This is a test email, please ignore.";
if (mail($to, $subject, $message)) {
    echo "Thank you for sending email";
} else {
    echo "Can't send email";
}
?>
I placed mail.php into the web area of the virtual domain in question and then accessed it through a browser. The resulting page displayed the echoed message "Can't send email"; no PHP-specific error messages were shown. Analyzing the maillog showed no trace of mail injected from the virtual domain in question. I needed to verify whether the issue was specific to the virtual host in question or server-wide, so the same test script was tried in a few other virtual hosts and produced the same result. That means the issue is not specific to a virtual host; it's a server-wide issue. Looks like Apache/PHP is broken.
Let me see whether the mail() function is disabled in the php.ini file using disable_functions.
[root@dns1 root]# grep disable_functions /etc/php.ini disable_functions =
No, the mail() function is not disabled.
I turned on display_errors and display_startup_errors in php.ini so that any internal PHP error would be displayed on the web page, but that didn't help either: the test PHP page doesn't display any error, and there are no error messages in the Apache, Sendmail, or other system logs. What next?
[ When debugging a production system be sure to use your development system (which, by the way, should be identical to and separate from your production environment). If you cannot avoid using the production system for this, make sure your error messages are never displayed to the browser and redirect them to your logs. This is more secure and doesn't bother your end users with messages they can't understand. -- René ]
As I mentioned earlier, in order to know what's happening at the process level, the strace utility is very useful. Before using strace to troubleshoot the issue, I will give a brief explanation of how Apache serves an incoming request.
Apache starts by creating a parent process with root privileges. This is the main process; it is responsible for forking child processes and maintaining them. The main Apache process doesn't serve any requests; the requests are served by the child processes. The number of idle child processes at a given time is governed by the MinSpareServers and MaxSpareServers directives in httpd.conf. When a new request comes in, it is served by one of the idle child processes; if there are no idle children, the parent forks a new child to serve the incoming request. From the ps result shown below, it's clear that the process with PID 1861 is the Apache parent process: it is running with "root" privileges, and all the child processes are running as user "apache".
[root@haqmail ~]# ps aux | grep httpd
USER       PID %CPU %MEM   VSZ  RSS TTY      STAT START   TIME COMMAND
root      1861  0.0  0.4 25680 1992 ?        Ss   Sep24   0:02 /usr/sbin/httpd
apache    2295  0.0  0.4 25852 2024 ?        S    Sep24   0:00 /usr/sbin/httpd
apache    2296  0.0  0.4 25852 2024 ?        S    Sep24   0:00 /usr/sbin/httpd
apache    2297  0.0  0.4 25852 2024 ?        S    Sep24   0:00 /usr/sbin/httpd
apache    2298  0.0  0.4 25852 2024 ?        S    Sep24   0:00 /usr/sbin/httpd
apache    2299  0.0  0.4 25852 2024 ?        S    Sep24   0:00 /usr/sbin/httpd
apache    2300  0.0  0.4 25852 2024 ?        S    Sep24   0:00 /usr/sbin/httpd
apache    2301  0.0  0.4 25852 2024 ?        S    Sep24   0:00 /usr/sbin/httpd
apache    2302  0.0  0.4 25852 2024 ?        S    Sep24   0:00 /usr/sbin/httpd
A better view of the parent-child relationship is available from the pstree result:
[root@haqmail ~]# pstree -p | grep httpd
+-httpd(1861)---httpd(2295)
|              +-httpd(2296)
|              +-httpd(2297)
|              +-httpd(2298)
|              +-httpd(2299)
|              +-httpd(2300)
|              +-httpd(2301)
|              +-httpd(2302)
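For reference, the pool of idle children discussed above is tuned in httpd.conf; typical Apache 1.3 settings look something like this (values are illustrative, not taken from the server in question):

MinSpareServers 5
MaxSpareServers 10
StartServers    5
MaxClients      150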
Now we know that our request for mail.php is served by one of the Apache child processes, and that strace can be used to get details of how requests are served, but there is one more problem: which child process serves the mail.php request? Either we know the process ID of the exact child process, or we must trace all the child processes and then sort through strace's output. There is no way to know in advance which child process will serve the mail.php request, so we will have to trace the parent Apache process and all its children. The "-f" strace option traces child processes as they are created by currently traced processes as a result of the fork() system call.
Here we go...
First stop Apache and then restart Apache with strace.
[root@dns1 root]# strace -f -o trace.txt /etc/rc.d/init.d/httpd start
The "-o" option saves the result to "trace.txt" file. Now access the test PHP page through the browser. Stop strace and restart Apache as usual. It may be necessary to send the strace process a SIGKILL signal, because it captures some signals it gets from the terminal session.
Let us now go ahead and examine the strace result in the trace.txt file.
[root@dns1 root]# grep mail.php trace.txt
21837 read(7, "GET /mail.php HTTP/1.1\r\nUser-Age"..., 4096) = 472
21837 stat64("/var/www/virtdomain/mail.php", {st_mode=S_IFREG|0644, st_size=587, ...}) = 0
From the above grep result, it's clear that the Apache child process serving our request for mail.php is the one with PID 21837. Now grep trace.txt for "21837"; the relevant result is pasted below.
21837 chdir("/var/www/virtdomain") = 0
21837 open("/var/www/virtdomain/mail.php", O_RDONLY) = 8
. . . .
21837 fork() = 21844
The last line shows that the Apache child process forks another process with PID 21844. Let us grep for 21844 in trace.txt and see what it does.
21844 execve("/bin/sh", ["sh", "-c", "/usr/sbin/sendmail -t -i"], [/* 21 vars */]) = -1 EACCES (Permission denied)
Well, the process is used for sending mail via /usr/sbin/sendmail, but incorrect permissions prevent it from doing so. Sendmail's permissions are set correctly, but checking /bin/sh reveals that it is set to "770" with "root.root" ownership. Since the Apache child process is running as user "apache", it has no read or execute permission on /bin/sh; hence the issue. Changing the /bin/sh permissions to "755" fixed it.
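A sketch of the check and the fix (assuming /bin/sh is a regular file here, rather than a symlink):

ls -l /bin/sh        # shows -rwxrwx--- root root: user "apache" cannot execute it
chmod 755 /bin/sh    # restore world read/execute permission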
[ At this point I'd be inclined to wonder why the file permissions were wrong. It could be that someone is tip-toeing through your filesystem. -- Steve Brown ]
With a basic understanding of Apache and the help of strace, we could find the root cause of the issue and fix it. Strace is a general-purpose utility, and it can be used to troubleshoot any program. Strace and GDB (the GNU Debugger) are very useful in system-level troubleshooting; here is a good article discussing both utilities.
Talkback: Discuss this article with The Answer Gang
I'm an MTech. in Communication Systems from the IIT Madras. I joined Poornam
Info Vision Pvt Ltd. in 2003 and have been working for Poornam since then.
My areas of interest are performance tuning, server monitoring, and
security. In my free time I practice Karate, read books and listen to music.
BKCSS.RVW 20061021
"Classic Shell Scripting", Arnold Robbins & Nelson Beebe, 2005, 0-596005-95-4 %A Arnold Robbins %A Nelson H. F. Beebe %D February 1, 2005 %G 0-596005-95-4 %I O'Reilly Media %O http://www.amazon.com/Classic-Shell-Scripting-Arnold-Robbins[...] %P 534 pages %T Classic Shell Scripting
When I first started reading "Classic Shell Scripting" by Arnold Robbins and Nelson H. F. Beebe, the quality of the content inspired me to write a review of this book. As opposed to most books on the subject that only explain and give examples of syntax, this book aims to develop in the reader a deeper understanding and true mastery of the POSIX shell.
The UNIX toolbox philosophy has been (to use a description from Robert Floyd's acceptance of the ACM Turing Award) a staple "programming paradigm" for UNIX programmers for several decades. In addition to developing a better understanding of existing UNIX tools, this book will help programmers understand the ideas behind "pipeline programming": producing programs that take data sources, process the data in a sequence of serially connected steps, and output the end result.
For example, the "wc" command provides standard counts of characters, words, and line numbers in a given file. To count all of the non-comment lines in all shell scripts (assumed to end with .sh) in ~/bin:
cat `find ~/bin/ -name "*.sh"` | sed -e '/^ *#/d' | wc -l
This simple pipeline creates a data source containing all lines from all shell scripts ending with .sh in your ~/bin directory. The sed command then deletes any lines beginning with comments and the wc command counts the remaining lines. Commands like this make it easy to report how many lines of code an application has. This is a simple example of what can be done with UNIX pipelines. This book helps the reader develop the skills to write such programs.
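An equivalent pipeline that avoids the initial cat (and assumes filenames without whitespace) would be:

find ~/bin -name "*.sh" | xargs sed -e '/^ *#/d' | wc -l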
The book begins with an inviting preface that draws the reader in right away. The authors mention that the point of learning shell scripting is to obtain proficiency in using the UNIX toolbox philosophy: combining smaller, specialized tools to accomplish a larger task. The book builds its main themes around this idea.
Each chapter contains background on and an introduction to the Unix toolbox philosophy. Collectively, they emphasize the need for, and the eventual standardization of, the Unix utilities through the POSIX standard.
[ Much of the above recapitulates the "Main Tenets" from Mike Gancarz' "The UNIX Philosophy" (Digital Press, Newton, MA 1995, 151 pp., $19.95, ISBN 1-55558-123-4) - another excellent read. -- Ben ]
I found the book to be a plethora of interesting ideas and command descriptions. Rather than describe each chapter in detail, I have chosen to present a sequence of "factoids" containing the "Classic Shell Scripting" content I found most interesting. I should mention that these are just a sample of the types of things you can learn by reading this book.
The built-in command echo is not as universal as you might first think. The BSD version allowed the switch "-n" to disable the printing of a newline after the string:
echo -n "Enter Choice: "
The System V version of echo took another approach for the same purpose: it recognized a special escape sequence, "\c", so the above would become:
echo "Enter Choice: \c"
The current POSIX standard for echo does not include the "-n" option. On my Linux system, there are two echo commands: one built into the shell, and the other located in /bin. The System V behavior can be obtained with the "-e" switch:
echo -e "Enter Choice: \c"
The point of this discussion is that echo in shell scripts may not be as portable as one imagines. For very simple string printing, this is not usually a problem; for more complicated situations, one should use the POSIX-standardized command "printf". To do the above with printf, one would use (printf adds no trailing newline unless one is requested):
printf "Enter Choice: "
If a newline were desired, it could be inserted with "\n".
Debugging shell scripts can be as simple as inserting a "-x" in the shebang line of the script. For instance, replace
#!/bin/sh
with
#!/bin/sh -x
The "-x" flag results in the shell echoing each and every command before the command gets executed. Each sub-shell created also increments a prompt, so you can tell at what stack level each command executed. For instance, if your script is
#!/bin/sh
cat $1
echo `wc $1`
Given an input set of files (file{1,2,3}) like the following (the $ being the shell prompt)
$ cat file1
file2
$ cat file2
file3
$ cat file3
Finally some data!
the script (degEg.sh):
#!/bin/sh -x
cat file1
cat $(cat file1)
cat $(cat $(cat file1))
produces this output:
+ cat file1
file2
++ cat file1
+ cat file2
file3
+++ cat file1
++ cat file2
+ cat file3
Finally some data!
Each level of "+"'s denotes the stack level in the script. For instance, the first command "cat file1" is at stack level 1 and produces the result "file2". The next command is
cat $(cat file1)
which must execute cat file1 first, before it can execute the "cat" command on the existing result. This "inner" call is performed at stack level 2, represented in the above as
++ cat file1
This result is file2, which is then subject to the next cat command, shown in the debugging output as
+ cat file2
with the result of "file3". The rest of the example is similar.
There has been a recent push in the POSIX community aimed at internationalization. For example, you can make your computer speak Italian and display help for the ls command with:
LC_ALL=it_IT ls --help
These are just a small sample of some of the interesting things this book has to offer. If you are a shell programmer who wants to take his/her skills "to the next level", you should consider reading this book.
Talkback: Discuss this article with The Answer Gang
John Weatherwax started running Linux when his undergraduate Physics
laboratory switched to it from a proprietary UNIX system. He was
overwhelmed with the idea that by individual donations of time and
effort such a powerful operating system could be created. His
interests are particularly focused on numerical software and he is
currently working on some open source software for the solution of
partial differential equations. He hopes to complete that project
soon.
When he is not working on his various scientific endeavors he spends
his free time with his wife and their 9 month old daughter.
These images are scaled down to minimize horizontal scrolling.
All HelpDex cartoons are at Shane's web site, www.shanecollinge.com.
Talkback: Discuss this article with The Answer Gang
Part computer programmer, part cartoonist, part Mars Bar. At night, he runs
around in his brightly-coloured underwear fighting criminals. During the
day... well, he just runs around in his brightly-coloured underwear. He
eats when he's hungry and sleeps when he's sleepy.
By Samuel Kotel Bisbee-vonKaufmann
Greetings from the Gazette's reference section! Recently I offered my crossword writing services to the Linux Gazette, which our fair editor was rather ... enthused about.
[ Sam, you didn't think I *meant* it about the caviar and champagne served by the Playboy bunnies, did you? Some people will take a figure of speech _so_ seriously... -- Ben ]
My goal with these puzzles is to provide a Linux and FOSS vocabulary brain teaser. Some of the clues and answers will be purposefully obscure in the hopes that you will look them up once you have completed the puzzle. I will also surround said parts with easier words or common "geek" culture references.
I am open to commentary, so if you feel I could be doing something better or you think you have found an error, please let me know. And before you ask, yes, I do hope to increase the size of the puzzles to the standard 15x15 (time permitting).
To your enjoyment and much head scratching,
--Samuel Kotel Bisbee-vonKaufmann, "ravidgemole"
[Crossword grid: numbered squares 1-33; see the clues below.]
Across
1: Chown, not chmod
6: _assassin
10: Using libcss in USA
11: Tiny : DSL :: _ : Ubuntu
12: Test Software
13: Grab music from other's iPod
14: Popular collection of scientific programs
16: Mozilla's web CVS tool
19: Interface Data Unit
20: Upload : FTP :: Update : _
21: "32"s in ASCII
23: Maintenance package's daemon name
25: Uses sin, cos, ang., tan
26: "State of MA _ toward open source"
30: Study of electricity & electronics
31: /, sysop, x in f(x) = 0, sudo's ~
32: _tat, package of sar, iostat, and mpstat
33: Sniff

Down
1: Output in base eight
2: Adv. queuing method
3: US gov. term for the Internet
4: Popular LISP based text editor
5: "_ the source code!"
6: _metal theme
7: Often world readable directory
8: A howto
9: Connects chipset's northbridge and RAM
15: Grip, SoundJuicer, cdparanoia
16: Clients flooding a server's connection
17: "Java & XML Data Binding" bird
18: Script kiddie tools
22: Plugin
24: Many of a common bot
27: Ectoplasmic _
28: Magn_
29: Detects buffer overflows on stacked vars
[Crossword solution grid]
Talkback: Discuss this article with The Answer Gang
Samuel Kotel Bisbee-vonKaufmann was born ('87) and raised in the Boston, MA area. His interest in all things electronics was established early as his father was an electrician. Teaching himself HTML and web design at the age of 10, Sam has spiraled deeper into the confusion that is computer science and the FOSS community, running his first distro, Red Hat, when he was approximately 13 years old. Entering boarding high school in 2002, Northfield Mount Hermon, he found his way into the school's computer club, GEECS for Electronics, Engineering, Computers, and Science (a recursive acronym), which would allow him to share in and teach the Linux experience to future generations. Also during high school Sam was abducted into the Open and Free Technology Community (http://www.oftc.org), had his first article published, and became more involved in various communities and projects.
Sam is currently pursuing a degree in Computer Science at Boston University and continues to be involved in the FOSS community. Other hobbies include martial arts, writing, buildering, working, chess, and crossword puzzles. Then there is something about Linux, algorithms, programing, etc., but who makes money doing that?
Sam prefers programming in C++ and Bash, is fluent in Java and PHP, and while he can work in Perl, he hates it. If you would like to know more then feel free to ask.