...making Linux just a little more fun!
Hi all,
I'd like to add reminder software to my system. I have tested calendar and am not satisfied with it. Now I have a very sophisticated reminder tool called remind. Its tkremind front end adds special weight to remind by making configuration as easy as KAlarm in KDE. I am looking for a tool which can make the message (from remind) appear on my desktop. Could anyone give me any ideas?
[Thomas] Assuming remind uses some transient form to store the reminder somewhere, then you can use 'osd_cat' or some other utility to have it "display" on the screen.
I have found another tool called xmessage -
[Thomas] That has been around since the 80s.
like
xmessage -file <filename>
So you have to write your message to a file (ASCII) first.
[Thomas] You don't even need to do that. xmessage can read from STDIN as well.
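For example, something along these lines pops the reminder up with no temporary file; the remind invocation in the comment is my assumption of a typical usage, not from the thread, and the snippet falls back to plain output when no X display is available:

```shell
# Pipe the reminder text straight into xmessage; '-file -' makes
# xmessage read STDIN.  Guarded so it degrades to a plain echo off X.
msg="Lunch with TAG at noon"       # stand-in for: remind -n ~/.reminders
if [ -n "$DISPLAY" ] && command -v xmessage >/dev/null 2>&1; then
    printf '%s\n' "$msg" | xmessage -center -file -
else
    printf '%s\n' "$msg"
fi
```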
Hi there,
I'm wondering if it's wise to allow a remote user within the LAN to log in as root, by adding that user's public key to root's "authorized_keys" for that machine.
[Kapil] There is an "sudo"-like mechanism within SSH for doing this. In the authorized_keys file you put a "command=...." entry which ensures that this key can only be used to run that specific command.
All the usual warnings a la "sudo" apply regarding what commands should be allowed. It is generally a good idea to also disable agent forwarding, X11 forwarding, and pty allocation.
Here is an entry that I use for "rsync" access. (I have wrapped the line and AAAA.... is the ssh key which has been truncated).
from="172.16.1.28",command="rsync -aCz --server --sender . .", no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-dss AAAA..... rsyncuser
I'm writing some scripts to back up data on our small business network here. One option is to get each machine to periodically dump its data on a specific machine using NFS. The option I'm interested in is to get a designated machine to remotely login to each machine and transfer the files over a tar-ssh pipe.
The only reason to be using root access is because some directories (/root, some in /var/lib) can only be read by root. Would changing permissions (e.g. /var/lib/rpm) affect anything, if I chgrp the directories to a "backup" usergroup?
I'm concerned with one machine, a web server, that will be included in the backup scheme. All machines here use Class A private network addresses and are behind a NAT firewall, but the web server can be accessed from the Internet. Will allowing root login over ssh on that machine pose a huge security risk, even by allowing ssh traffic from only the local network?
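The tar-over-ssh pipe mentioned above would look something like the comment below (hostname and path are hypothetical); the runnable part just demonstrates the pipe itself locally:

```shell
# On the backup machine, pull a directory from each host as a
# compressed tar stream (hostname/path hypothetical):
#
#   ssh backup@webserver 'tar -C / -czf - etc' > webserver-etc.tar.gz
#
# The pipe itself, demonstrated locally: create a tree, stream it
# through tar, and list the archive contents on the receiving end.
mkdir -p /tmp/tagdemo && echo hello > /tmp/tagdemo/file.txt
tar -C /tmp -czf - tagdemo | tar -tzf -
```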
[Rick] I just happened to be reading the main discussion list at the Linux Documentation Project, and saw this post that appears intended for the Gazette, instead.
It seems to be intended as a correction to http://linuxgazette.net/114/misc/tag/mike.show-mtime.pl.txt .
[Ben] Thanks, Rick - that's great. I'm going to CC Tony on this; perhaps he'll find it useful.
Hi All,
There's an issue with the above file - it just caused my syslog to jump from about 5 MB to 150 MB.
"Use of uninitialized value in numeric ne (!=) at mike.show-mtime.pl.txt line 7"
If someone could patch it, I'd be grateful.
Diff:
--- mike.show-mtime.pl.txt.orig	2005-08-27 07:43:18.000000000 -0400
+++ mike.show-mtime.pl.txt	2005-08-27 07:43:25.000000000 -0400
@@ -3,7 +3,7 @@
 my( $a, $b ) = 0;
 {
 $b = ( stat "foo" )[ 9 ];
-if ( $a != $b ){
+if ( $a ne $b ){
 print scalar localtime, ": $b";
 $a = $b
 }
[Ben] Hi, Tony -
(Even though the script is credited to Mike Orr, I'm the author of it - it just got misattributed. I don't expect him to answer for my sins.)
The problem with the above is that you don't have a file called "foo"; as a result, 'stat "foo"' comes out empty, and the rest of the problem follows from that. The above was never meant to be a complete script, simply an example of the algorithm for Suramya (I seem to recall that's who started the original discussion) to use.
I agree that changing "!=" to "ne" would "fix" the problem - but it would be the wrong problem. Pointing the script at an existing file - perhaps by changing 'stat "foo"' to 'stat $ARGV[0]' and specifying the filename on the commandline - would seem to me to be the "right" solution. Of course, the rest of the script should be rewritten to meet real-life conditions as well... but that's beyond the scope of what we were talking about.
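For what it's worth, the same algorithm with a real filename argument can be sketched in shell (my sketch, not the original Perl; it relies on GNU stat):

```shell
# watch_mtime FILE [N] -- print a line each time FILE's mtime changes,
# polling once per second; N bounds the iterations (0 = run forever).
watch_mtime() {
    file=$1; n=${2:-0}; last=''; i=0
    while [ "$n" -eq 0 ] || [ "$i" -lt "$n" ]; do
        now=$(stat -c %Y "$file") || return 1  # GNU stat: mtime as epoch secs
        if [ "$now" != "$last" ]; then
            echo "changed: $now"
            last=$now
        fi
        i=$((i + 1))
        sleep 1
    done
}
```

Run as, say, `watch_mtime /var/log/syslog` - the point being that the file actually exists, so stat never comes back empty.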
http://www.linux-watch.com/news/NS8124627492.html
...............
Reason number one: Linux is too complicated

Even with the KDE and GNOME graphical windowing interfaces, it's possible -- not likely, but possible -- that you'll need to use a command line now and again, or edit a configuration file.

Compare that with Windows where, it's possible -- not likely, but possible -- that you'll need to use a command line now and again, or edit the Windows registry, where, as they like to tell you, one wrong move could destroy your system forever.
...............
[Sluggo] When people say "Linux doesn't have enough applications", that usually translates to, "Linux doesn't have certain specific applications, namely MS Office, Photoshop, Yahoo Messenger with webcam, etc."
[Heather] StarOffice. (OK, Photoshop is a fair dig; we have lots of things like it, but none are trying hard to be it.) Yahoo with webcam == ayttm, but most people don't know that.
In a particularly, ahem, lively discussion at the starport last week, it was agreed that the problem is that menu interface standardization isn't. Not even by Mr. Tanenbaum's broad definitions.
For example, RH has been using GNOME for a while, but has changed the menu layout in every major revision and some of the minor ones.
KDE and GNOME fans alike can't decide whether to keep their menus at the top or the bottom. If the menus were hidden, you'd have a coin-flip chance of even knowing where to look.
To most newbies, "click the root menu" may as well be hidden entirely: if they have MS Windows experience they won't think it's useful (there, clicking the desktop pulls up display settings), and if they don't have even that, they're just plain lost.
[Sluggo] Interestingly, I was going to mention Outlook and Outlook Express, but I haven't heard much about them recently. Has their popularity diminished?
[Heather] Yes, and thunderbird's and eudora's have increased.
My friend Colleen is looking to start an article series on people starting from zero* into Linux. I will of course be encouraging her and helping her out, too.
* yeah, absolute zero, Kelvin. The kind of people who think "my god, at least something says where to start" when they look at Windows(tm), then are stalled because they're afraid of the rest of the menu.
Much as I didn't find Linspire groovy (not to my beat, daddy-O), it serves an important duty for some.
Outlook Express comes as default with Windows, and it's not a particularly good email client. Thunderbird is taking that market.
Outlook is another kettle of fish: it isn't about using Outlook, it's about accessing an Exchange server. The big news back when Novell bought Ximian was that they open sourced Ximian Connector, so Evolution could access Exchange servers. I had a look through the code, and... it's a hack, basically.
Outlook and Exchange communicate with an extremely complicated protocol. Ximian Connector just connects to the Exchange web interface, if it's available, and basically acts as a screen scraper (not exactly: it works using a modified version of WebDAV, but it also screen scrapes to get enough data to be useful).
People who run versions of Exchange that don't have a web interface still have to stick with Windows. People who don't have the pull to get the web interface set up are also out of luck.
On the server end, it's not too bad. There are several open source Exchange alternatives that have equivalent features. The Outlook Connector Project (http://openconnector.org) aims to provide an open source set of MAPI DLLs to be used by open source projects (such as Kolab (http://kolab.org) or Open-Xchange (http://www.openexchange.com)), so Outlook can connect to them. Once the server end has been migrated away from Exchange, it's possible to bring in Linux at the client end with little disruption.
There's also work being done towards implementing the actual protocols used between Exchange and Outlook. Luke Leighton, formerly a Samba developer, has reverse engineered most (if not all) of the protocol (http://www.winehq.org/hypermail/wine-devel/2005/01/1054.html), and has started work on both client and server software (http://cvs.sourceforge.net/viewcvs.py/oser/exchange5.5).
The OpenChange (http://openchange.org) project is also working (slowly) towards an Exchange replacement. They seem to be focusing more on reverse engineering the database format used by Exchange, so there isn't too much overlap (so far).
[Suramya] Tag,
Got the following feedback on my Jabber install guide. It has some good advice for improvements so...
I've just installed Jabberd 2 server following your Guide (http://linuxgazette.net/112/tomar.html), and I want to share some experiences which could improve this guide.
[Suramya] Thanks for taking the time to email me with your feedback. Would you mind if I shared this with the Linux Gazette so that they can publish it in their next issue? That will help other people who are trying to install a Jabber server.
No, of course I don't mind. You can share it with everyone; I wrote it only to help others.
1) Since Jabberd 2.0s3 or 2.0s4 (the newest one is 2.0s9), the Libidn library isn't installed automatically with Jabberd, so you should note in your guide that it must be installed BEFORE running ./configure in the jabberd directory. It'll save the time of people trying to follow your guide.
[Suramya] OK, I didn't know that. That's a good thing to keep in mind for future releases.
2) It would be easier if you wrote some information on installing the MySQL libraries, because I had to use the --disable-mysql option, as I couldn't find the right libraries (according to some mailing list, the mysql-client and mysql-devel packages). And/or note that MySQL isn't a must - the user can replace mysql with berkeley in the c2s.xml configuration file.
[Suramya] Hmm. I didn't put instructions on how to install MySQL because http://mysql.com has instructions on how to install the MySQL libraries. But that's something to keep in mind for the next version, I guess.
Yeah, you can replace MySQL with Berkeley DB, but I haven't set it up using that, so I don't know how...
About MySQL - well, I asked about it because I had a lot of trouble with MySQL. In the end I managed to install Jabberd 2 with MySQL, after writing my last letter to you. I configured it with the options
./configure CPPFLAGS=-I/usr/include/mysql LDFLAGS=-L/usr/lib/mysql --enable-debug
It looks like Jabberd 2 can't find MySQL even if it's installed (I installed the RPMs from my Red Hat install CD). So I think it would be a good idea to write something about the CPPFLAGS and LDFLAGS options for those who run into the same problems I had. It would also be useful to put in some info on how to create an SSL certificate for the server (I followed these instructions: http://jabberd.jabberstudio.org/2/docs/app_sslkey.html to create the SSL certificate, and these: http://jabberd.jabberstudio.org/2/docs/section05.html#5_2 to set up Jabberd 2 to use it).
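For testing purposes, a self-signed certificate can be generated along these lines (the filenames and the CN are examples of mine; the jabberd docs linked above cover the real procedure):

```shell
# Generate a key and a self-signed certificate non-interactively, then
# combine them into the single PEM file jabberd-style servers expect.
openssl req -new -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj '/CN=jabber.example.com' \
    -keyout server.key -out server.crt
cat server.key server.crt > server.pem
```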
I think that's all I had to write for now. Thanks for your answer.
3) Also, it would be useful to suggest enabling debugging (run ./configure --enable-debug), so everyone can see what's going on when something goes wrong.
[Suramya] Good suggestion. I will definitely add that in the next version.
By the way, I want to thank you for such a good installation guide.
[Suramya] Thanks. Glad you liked it.
OK, guys, this is a bit wacky. Somebody's going to have to telephone T.R., to advise him of a troublesome error in the genetikayos.com DNS, which in turn is clobbering deliverability of e-mail to hostname "linuxgazette.net". As a reminder, our primary DNS is at T.R.'s machine, and my nameserver pulls down copies from there as our second of (only) two nameservers.
T.R. is at [snip]. I've been trying to call him, but his line's been busy.
The first thing I noticed, a few minutes ago, was a bunch of SMTP error messages like this:
----- Forwarded message from Mail Delivery System <Mailer-Daemon@linuxmafia.com> -----

From: Mail Delivery System <Mailer-Daemon@linuxmafia.com>
To: tag-bounces@lists.linuxgazette.net
Subject: Mail delivery failed: returning message to sender
Date: Sun, 11 Sep 2005 21:42:52 -0700

This message was created automatically by mail delivery software.

A message that you sent could not be delivered to one or more of its
recipients. This is a permanent error. The following address(es) failed:

  ben@[snip]
    all relevant MX records point to non-existent hosts

------ This is a copy of the message, including all the headers. ------
[Snip a copy of Jimmy's post to tag@, as addressed by my mailing list software to Ben's subscription address of [snip] .]
OK, so the next step was to remind myself of what our MX records are, because I couldn't remember:
[rick@linuxmafia] ~ $ dig -t mx linuxgazette.net

; <<>> DiG 9.2.4 <<>> -t mx linuxgazette.net
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 38278
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 3

;; QUESTION SECTION:
;linuxgazette.net.              IN      MX

;; ANSWER SECTION:
linuxgazette.net.       41736   IN      MX      10 genetikayos.com.

;; AUTHORITY SECTION:
linuxgazette.net.       41720   IN      NS      ns1.linuxmafia.com.
linuxgazette.net.       41720   IN      NS      ns1.genetikayos.com.

;; ADDITIONAL SECTION:
genetikayos.com.        1218    IN      A       127.0.0.1
ns1.linuxmafia.com.     139310  IN      A       198.144.195.186
ns1.genetikayos.com.    128120  IN      A       64.246.26.120

;; Query time: 53 msec
;; SERVER: 198.144.192.2#53(198.144.192.2)
;; WHEN: Sun Sep 11 22:02:21 2005
;; MSG SIZE  rcvd: 160
Some of you will be really sharp-eyed and immediately spot something eye-popping in the above (especially since I've called your attention to the possibility) -- but put yourself in my shoes, and imagine that all you saw, at first, is the single MX record, priority=10, pointing to genetikayos.com .
Ah, that makes sense: mail to our domain, other than mail to our mailing-list subhost, gets redirected to T.R.'s machine. So, I absent-mindedly carried out the standard next step in diagnosing SMTP problems, which is to attempt a manual SMTP session using /usr/bin/telnet -- and got a heck of a big surprise:
[rick@linuxmafia] ~ $ telnet genetikayos.com smtp
Trying 127.0.0.1...
Connected to localhost.localdomain.
Escape character is '^]'.
220-linuxmafia.com ESMTP Exim 4.44 (EximConfig 2.0) Sun, 11 Sep 2005 22:02:53 -0700
220-.
220-WARNING: Unsolicited commercial e-mail (UCE/SPAM), pornographic
220-material, viruses, and relaying are prohibited by this server and
220-any such messages will be rejected/filtered automatically,
220-depending on content.
220-.
220-By using this server, you agree not to send any messages of the
220-above nature. Please disconnect immediately, if you do not agree
220-to these terms and conditions.
220-.
220-Please contact postmaster@linuxmafia.com if you have any
220-enquiries about or problems with this server.
220-.
220-Find out more about EximConfig for the Exim mailer by visiting
220-the following URL: http://www.jcdigita.com/eximconfig
220 .
Er, what? That's my SMTP banner. Wait, didn't I just telnet into _T.R.'s_ SMTP port?
At this point, I gazed a bit higher up, re-read the "dig" output, and boggled:
; <<>> DiG 9.2.4 <<>> -t mx linuxgazette.net
[snip]
;; ANSWER SECTION:
linuxgazette.net.       41736   IN      MX      10 genetikayos.com.
[snip]
;; ADDITIONAL SECTION:
genetikayos.com.        1218    IN      A       127.0.0.1
                                                ^^^^^^^^^
Um, OK. That now gets appended to the Big Book of Things Not to Do with DNS.
Just to double-check:
[rick@linuxmafia] ~ $ host genetikayos.com
genetikayos.com has address 127.0.0.1
There are times when the loopback address is just not your friend. Writing SMTP-related DNS RRs is one of those times.
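A cheap safeguard against this class of mistake is to grep your zone files for loopback A records before (re)loading them. Here's the idea, run against a throwaway demo file (a real check would point at your zone directory instead):

```shell
# Build a demo zone file containing the offending record, then scan
# for any A record that points into the loopback range.
mkdir -p /tmp/zones
cat > /tmp/zones/db.genetikayos.com <<'EOF'
genetikayos.com.  1218  IN  A  127.0.0.1
EOF
grep -rEn 'IN[[:space:]]+A[[:space:]]+127\.' /tmp/zones
```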
I'll probably keep trying to reach T.R. by telephone for a while: He's likely not very reachable by e-mail at the moment.
[Jimmy] Eight hours later...
T.R. has fixed his DNS:
[rick@linuxmafia] ~ $ host genetikayos.com
genetikayos.com has address 64.246.26.120
(This came up elsewhere. I remember grinding my teeth about it at the time, so I don't know why I didn't put it in the article.)
As well as having 10 different versions[1], RSS has two competing time formats: RSS 0.91/2.0 use RFC 822 format (date -R); RSS 1.0 uses dc:date, which needs W3CDTF (a subset of ISO 8601: date --iso-8601, date --iso-8601=minutes, date --iso-8601=seconds).
Mark Pilgrim has a blog entry that explains a bit of the history behind this here: http://diveintomark.org/archives/2003/06/21/history_of_rss_date_formats
[1] I said 9 in the article, but RSS 3.0 came afterwards (and, IIRC, is nothing like the other 9 RSS formats)
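For the record, the two formats as GNU date emits them, side by side:

```shell
# RFC 822 style, as used by RSS 0.91/2.0 (e.g. "Sun, 11 Sep 2005 21:42:52 -0700"):
date -R
# W3CDTF (an ISO 8601 subset), as needed by RSS 1.0's dc:date:
date --iso-8601=seconds
```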
...and so I have the keys to the mailbag this month (errors, etc., are my fault).
Heather has gotten hold of an English mobile phone, on loan from a friend, and is trying to get to grips with both SMS and being a tourist...
[Heather] OH BTW, ENGLAND IS NICE!
[Jimmy] Try to spend as much time as possible being a tourist!
[Heather] A POOR TOURIST, BUT STILL!
[Jimmy] Oh heck, all you have to do is photograph everything you see, and ask the most obvious questions that come to mind. It's fun, give it a whirl.
[Jimmy] Oh. Can't send you photos.
[Heather] CAPS ARE THE CLUE, THIS PHONE IS PRETTY OLD...
[Jimmy] Ah. Large and brick-shaped, prompting the question: do I drive this?
[Heather] IF YOU HAVE 2 ASK YOU CAN'T AFFORD IT :-P
Heather will be back in time for next month's issue.
We'll be heading off to the UK this weekend. Jim reports that we've been signed up for internet access at his hotel assignment, so I'll probably be able to send in a blurb, but hopefully I will have enough things to do that I won't be lurking in a hotel all that much.
[Ben] [smile] Enjoy yourselves and don't let your vacation time be spoiled by schedules. If you can get it in without stressing, cool; if not, there's always next month.
[Thomas] Indeed, although I am sure something can be arranged.
My poor mactop decided to have a fit when I tried to make it dual-boot. One of the LUG locals is a true Macintosh guru, though, so he's done his best to fix it up, and I'll have it back at the installfest. If it's not up to speed, I'll end up taking terra with me instead... er, as soon as I seal her up. Those lil bitty screws are kinda necessary after all, if you don't want people gettin' weird because your laptop's falling apart. To its credit, terra's hibernation works perfectly.
Anyways, there won't be cover art unless I'm taking the Mac.
I've crosstrained Ben in using the g2 form of lgazmail to generate tips and the mailbag parts, by way of showing him the generation-phase on issue 118's data.
[Ben] FSVO "trained". You certainly gave it your best shot; the rest is up to that gadget I carry around between my ears. Don't worry, if it all falls down around my ears and everyone hates me and the government sends in the black helicopters, I promise to not blame you with my last dying breath.
Of course everyone won't hate you. We've got xteddy in the tag lounge.
In short, if Ben would like to log in to gemini, he should be able to play virtual Heather this month. Hopefully that will serve in case I happen to be truly without 'net.
[Ben] Yeesh, options. I feel my brain melting already...
Bye now! See you next month!
[Ben] Enjoy the trip, both of you.
Oh yeah, someone tickle silentk; I think he's got some more kudo letters.
[Ben] Presumably, there's a "someone" here who knows how to do that. I hope.
[Thomas] You need a cattle-prod.
[Ben] [blink] I usually find that a nice cup of coffee does it for me, but I'll take your suggestion under advisement. (Now that I think about it, I've had a few mornings when it would probably have been just the thing...)
Kayos has just updated the SVN back-end to FSFS. Hopefully, this will result in fewer deadlocks - I guess we'll find out as time goes on.
[Jimmy] Sure enough, there have been fewer problems (so far) this month. Hopefully the trend will continue.
While most people know to turn off any services they don't want to offer the world, many do not realize this applies at the interface level as well as the service level.
[Kapil] Other than configuring this by editing the configuration files for the individual daemons that open the listening sockets, you can also use iptables/ipchains to block the (ir)relevant address/port pairs.
Here is the relevant portion of a file called "iptables.save" on a machine that runs a public web server and also accepts ssh connections.
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -d 127.0.0.1 -i lo -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
COMMIT
You can enable this with
iptables-restore < iptables.save
You can add/remove ports according to what connections you wish to accept. You should probably also accept some ICMP traffic, so that you don't lose routing information such as "destination unreachable" and path-MTU discovery messages.
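For instance, lines like these could be added to the iptables.save file above, before the COMMIT line (the ICMP types here are my suggestion of a plausible minimum; adjust to taste):

```
-A INPUT -p icmp --icmp-type destination-unreachable -j ACCEPT
-A INPUT -p icmp --icmp-type time-exceeded -j ACCEPT
-A INPUT -p icmp --icmp-type echo-request -j ACCEPT
```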
A typical networked computer has two interfaces: lo (the loopback) and eth0 (the Ethernet). Most daemons listen on all interfaces unless you tell them otherwise. Obviously, your web server, mail server, and CUPS (printer) server must listen on the public interface if you want other computers to access them. But if you're running mail or CUPS only for your own computer, you should make them listen only on the localhost. This eliminates a bunch of security vulnerabilities, because inaccessible programs can't be exploited.
There are several portscanners available but good ol' netstat works fine if you're logged into the target computer.
# netstat -a --inet
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 *:631                   *:*                     LISTEN
tcp        0      0 *:https                 *:*                     LISTEN
udp        0      0 *:bootpc                *:*
udp        0      0 *:631                   *:*
Add the "-n" option to bypass domain-name resolution. Here we see the secure web server listening on all interfaces (*:https): good. But CUPS is also listening on all interfaces (*:631, both TCP and UDP): bad. (We know port 631 is CUPS because that's what we type in our web browser to access the admin interface.) To make CUPS listen only on the localhost, I edited /etc/cups/cupsd.conf, commented out the "Port 631" line, and added "Listen localhost:631". (I like Listen better than Port because it shows in one line exactly which host:port combinations are in effect.) Note that you can't specify an interface directly; you have to specify the domain/IP attached to that interface.
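The change in /etc/cups/cupsd.conf, as described, amounts to just this (a sketch):

```
# Port 631
Listen localhost:631
```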
Then I restarted the server and checked netstat again:
# /etc/init.d/cupsd restart
 * Stopping cupsd...                         [ ok ]
 * Starting cupsd...                         [ ok ]
# netstat -a --inet
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 localhost:631           *:*                     LISTEN
tcp        0      0 *:https                 *:*                     LISTEN
tcp        0      0 10.0.0.1:32775          example.com:imaps       ESTABLISHED
udp        0      0 *:bootpc                *:*
udp        0      0 *:631                   *:*
Good, the TCP line changed to "localhost:631". The UDP line is still "*:631". I searched the config file and "man cupsd" for "udp" but found nothing. I guess that means you can't turn it off? I decided not to worry about it.
There's a new line in netstat: "10.0.0.1:32775 to example.com:imaps". It looks like Mozilla Thunderbird is automatically checking for mail. 10.0.0.1 happens to be my public IP. (IPs/domains changed to protect the innocent.) It connected to the secure IMAP port on example.com. 32775 was a free port the kernel chose at random, as always happens when you connect to an external server.
There's still one suspicious line, "*:bootpc". I'm not running a diskless workstation or doing any exotic remote booting, so what is this? "lsof" is a very nifty program that tells you which process has a file or socket open.
# lsof -i :bootpc
COMMAND  PID USER   FD   TYPE DEVICE SIZE NODE NAME
dhcpcd  3846 root    4u  IPv4   5398       UDP  *:bootpc
I am using DHCP, which runs this daemon while you're leasing an IP. I ran "man dhcpcd" and searched for "bootpc" and "port". Nothing. I guess it uses that port for some unknown reason. I decided not to worry about it.
[Kapil] Not quite. You shouldn't be running the dhcp-server (which is what the dhcpd program is). You are using dhcp in client mode so you should disable dhcpd from starting up.
[Peter] True, but the program in question listening on UDP port 68 (bootpc) is "dhcpcd", not the DHCP server, which indeed has the name "dhcpd". When a client requests a DHCP address, a process (either "dhclient" or "dhcpcd") listens on UDP port 68.
It's eleven o'clock. Do you know which services your computer is running?
[Pedja] OK, what about this?
pedja@deus:~ ]$ netstat -a --inet
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 *:6000                  *:*                     LISTEN
That's X server, right?
root@deus:/home/pedja# lsof -i :6000
COMMAND  PID USER   FD   TYPE DEVICE SIZE NODE NAME
X       3840 root    1u  IPv6   9813      TCP  *:6000 (LISTEN)
X       3840 root    3u  IPv4   9814      TCP  *:6000 (LISTEN)
I should add something like '-nolisten tcp' to the options that are passed to X when it starts (I use startx to, well, start X). My question is: where?
[Thomas]
/etc/X11/xinit/xserverrc
is the file you're looking for. By default (on most distros, anyway), '-nolisten tcp' is set already.
[Pedja] There's no xserverrc in Crux, so I made one with
#!/bin/sh
exec /usr/X11R6/bin/X -dpi 100 -nolisten tcp
in it. I've put it in my home folder.
[Pedja] Should I make an alias in .bashrc, like
startx () { /usr/X11R6/bin/startx -- -dpi 100 ${1+"$@"} 2>&1 | tee $HOME/.X.err ; }
or modify .xinitrc in ~, or... What's The Right Thing(tm) to do?
[Thomas] No alias. See above.
Hi,
I am a beginner with GNU/Linux. I am using Linux kernel 2.4.20-8 on Red Hat Linux 9.
I have written a TFTP client and server. I have created a UDP socket and, as per the RFC, I am sending a structure with the proper TFTP header and then the data.
It is working fine, and I am able to send and get files.
My problem is that when I use Ethereal and tell it to capture TFTP on the specified port, it shows the packets as UDP + data. I think I should get a UDP header, then a TFTP header, and then the data. But this is not happening in my case; my TFTP header is also shown as data.
How can I solve this problem?
[Breen] You're not by chance using a non-standard port for your tftp server, are you? If the traffic isn't on port 69/udp, ethereal won't know to decode it as TFTP.
[Ben] I think that your best bet would be to look at a standard TFTP conversation and compare it to yours. There may be some subtle difference that you're missing, or perhaps a part of the RFC that you're misinterpreting.
I don't have any guide... hoping to get a reply from you people.
[Ben] I have not read it myself, but I understand that Richard Stevens' "UNIX Network Programming" series is the classic reference for this kind of work.
Hi Breen
You are right... I had used a non-standard port, so it was not shown as TFTP.
[Breen] Hi Deepak --
I've got two requests:
1) Please don't post html. Email is a text medium.
2) When you ask a question on a mailing list, you should follow up on the mailing list. That allows all subscribers to benefit from the answer you receive. I've added The Answer Gang back to the recipients of this email.
Glad we were able to help you!
Is there any way to have multiple HTTPS domains on the same IP/port? The mod_ssl FAQ says name-based virtual hosts are impossible with HTTPS [1]. I've got two sites currently on different servers. Each is distinguished by a path prefix ("/a" and "/b"), so they aren't dependent on the domain name and can be installed in the same virtual host. The boss wants them consolidated on one server, and to plan for additional sites in the future. The problem is the certificates. A certificate is domain-specific, and it looks like you can have only one per virtual host.
So person A types https://a.example.com/a/ and it authenticates fine, but person B types https://b.example.com/b/ and gets a "domain does not match certificate" dialog. (I have seen this in some cases, but haven't gotten it in my tests. But it may be because we're still using unofficial certificates and getting the "unknown certificate authority" dialog instead.) The only solutions seem to be using a general domain for all the sites, getting a separate IP for each one, or running them on nonstandard ports.
[1] http://www.modssl.org/docs/2.8/ssl_faq.html ("Why can't I use SSL with name-based/non-IP-based virtual hosts?")
[Jay] Correct. You can't have more than one SSL server per IP address, because the certs are IP based, not domain name based.
They have to be, if you think about it, because you can't spoof IP [1] the way you can spoof DNS.
[1] unless you manage a backbone.
[Brian] I think, if your example is true, then [IIRC, you'll have to do more research] you can spend the bucks to get a wildcard cert that will handle [a-g].example.com/blah just fine. Alternatively, get extra IP addresses, alias the eth as needed, and multiple single-host certs can be applied. That works just fine. A separate set of SSL stanzas in each virtual host section, virtual host by number, not by name.
You may, in that case, actually want to run a separate invocation of apache for the SSL side of things, so that you can do IP-based virtual hosts for SSL, and name-based virtual hosts for port 80.
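The IP-based layout Brian describes would look roughly like this in the Apache config (the IPs, hostnames, and certificate paths are all hypothetical examples of mine):

```apache
# One IP address -- and therefore one certificate -- per SSL site.
<VirtualHost 192.0.2.10:443>
    ServerName a.example.com
    SSLEngine on
    SSLCertificateFile    /etc/apache/ssl/a.example.com.crt
    SSLCertificateKeyFile /etc/apache/ssl/a.example.com.key
</VirtualHost>

<VirtualHost 192.0.2.11:443>
    ServerName b.example.com
    SSLEngine on
    SSLCertificateFile    /etc/apache/ssl/b.example.com.crt
    SSLCertificateKeyFile /etc/apache/ssl/b.example.com.key
</VirtualHost>
```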
[Ramon] Because encryption is set up before any HTTP headers are sent, name based vhosting with multiple certificates is not possible.
The only thing that does work is multiple vhosts with one certificate that validates all of them. I've done that successfully with a project vhost server on ssl for multiple software development projects. You can get a wildcard certificate from rapidssl http://www.rapidssl.com for $199.
They're a dirt-cheap certificate provider, BTW: $69 for a two-year standard webserver certificate accepted in most (if not all) browsers.
If it were a small organization that would be a possibility. But we're part of a large organization and can't monopolize the entire domain (*.example.com). At the same time the sites are for multiple departments, and we haven't been able to come up with a *.subdomain.example.com that would satisfy all of them.
Oh wait, you're talking about wildcard IPs rather than wildcard domains? (checking rapidssl website) No, it is domains.
Hmm, getting a wildcard certificate would obviate the need for multiple certificates but that's actually the lesser of our problems. The greater problem is getting more IPs, justifying them, and putting in a new subnet for them. But I guess I'll tell management that if they really want these multiple domains on one computer, they'll have to factor a new block of IPs into the price.
Has anybody had experience with https://cert.startcom.org/ ? It appears to be a nonprofit project geared toward free certificates.
"The StartCom Certification Authority is currently undergoing an initial self and successive third party audit as required by various software vendors, such as Microsoft and Mozilla. This will lead to the natural support of the StartCom CA by the most popular browser and mail clients. Right now you still have to import our CA certificate into your browser, but chances are, that, during the life-time of your certificate (one year), your certificate will be supported without the need of the CA import."
Probably not an option for us yet, but it looks worth watching.
Duh, our netadmin pointed out that when the second site is moved over, we can take the IP from that computer. And my other site will replace seven other servers so we can take their IPs too. That'll last us past the foreseeable future. Anybody got a few HTTPS sites they need hosting for a year or two? (Just kidding.)
Mozilla has started hogging my screen. I can select other windows, but if Mozilla is maximised it remains in front of them. There is presumably a setting somewhere that is causing this behaviour, but the only setting I can find I can't seem to change. FYI, this is in KDE.
If I right click the Mozilla title bar and select advanced->special window settings->preferences, there is a checkbox either side of the "keep above" setting. The checkbox on the right is checked and greyed out. With a little fiddling I can get it unchecked, but if I click OK and then reopen the window to check it, I find that it is selected again.
I don't know if that setting is the source of the problem, but the other windows don't have it checked, so it's a good candidate.
Any ideas how to fix this one?
OK. Going down into the "special window settings" wasn't necessary. If I just use "advanced->keep above others" it toggles that checkbox. It's annoying and a little confusing that it can't be changed from "special window settings".
[Ben] Hmm. Perhaps one or two - my Firefox started doing some ugly thing a while back, so I whacked it over the head a couple of times, and will happily relate what LART I used. Mind you, this is in the nature of shotgunning rather than troubleshooting (I can hear the sounds of retching from the other techies here, but, hey, it works - and I didn't feel like pulling down a hundred meg or so of code and wanking through it.)
- Move your ~/.mozilla to, say, /tmp/DOTmoz.
- Start Mozilla.
- If $UGLY_BEHAVIOR is still present, uninstall the mozilla package (making sure to blow away, or at least _move_ away all the stuff in "/usr/lib" and "/etc") and reinstall from scratch. If it's still there, curse life and file a bug. Otherwise -
- Make a copy of your new ~/.mozilla (as, say, /tmp/DOTmoz_default.) Start replacing the subdirectories in the one in $HOME, one at a time, from /tmp/DOTmoz until the problem reappears. Narrow it down to the specific file, then diff that file against the default one. The line causing the problem should be relatively obvious - since Mozilla uses more-or-less sensible, descriptive names for their config variables.
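The dance Ben describes can be sketched in shell. Everything below runs in a scratch directory, with a made-up "prefs" subdirectory and settings standing in for a real Mozilla profile:

```shell
# Simulate the profile-bisection procedure in a scratch HOME.
H=$(mktemp -d)
mkdir -p "$H/.mozilla/prefs"
echo 'suspect setting' > "$H/.mozilla/prefs/prefs.js"

mv "$H/.mozilla" "$H/DOTmoz"               # stash the old profile
mkdir -p "$H/.mozilla/prefs"               # (the browser would recreate this)
echo 'default setting' > "$H/.mozilla/prefs/prefs.js"
cp -a "$H/.mozilla" "$H/DOTmoz_default"    # pristine copy for diffing

cp -a "$H/DOTmoz/prefs" "$H/.mozilla/"     # restore one subdirectory at a time
diff "$H/.mozilla/prefs/prefs.js" "$H/DOTmoz_default/prefs/prefs.js" || true
```

Once a restored subdirectory brings the bug back, the diff against the pristine copy points at the offending file and line.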
To (mis)quote the folks at the Mozilla Project, "it worked for me."
I'd say this was starting from the wrong end. Possibly my fault because I flagged it as Mozilla hogging the screen. With window behaviours like this, it's far more likely to be a window manager issue.
I have solved the problem now. You should have seen a followup email on the list.
[Ben] I've had similar problems (back in Netscape days, actually), and thought that it was the WM originally - it just made sense. Turned out to be that Netscape was doing some of its own craziness, at least in that case; I can definitely see where it could just as easily be the WM.
Hi Everyone, I have a couple of questions for the perl experts that seem to lurk around the TAG mailing list.
[Ben] Never heard of any around here. However, I do play one on a center stage once in a while, so I'll try to help.
I was playing around with the Yahoo Search API and decided to write a program that uses it to search for images based on user input and creates a collage from the results. I actually managed to get it to work (http://scripts.suramya.com/CollageGenerator) but need some help in fine tuning it.
The program consists of two parts: the frontend, which is a PHP page, and the backend, which is a Perl script. The PHP frontend writes the user input to a MySQL DB, which another Perl script I call wrapper.pl checks frequently; when it finds a new row, it calls collage.pl, which creates the collage.
[Jimmy] Um... is there any reason why the information has to be in a database? It seems like you're overcomplicating things: PHP is able to download files (IIRC, fopen can open URLs), and Perl is well able to do CGI (use CGI), and can be embedded in HTML like PHP using HTML::Embperl (http://search.cpan.org/~grichter/HTML-Embperl-1.3.6/Embperl.pod). This page (http://www.cs.wcupa.edu/~rkline/perl2php) has a Perl to PHP 'translation', but it's also good for the other direction.
You can also directly embed Perl in PHP (http://www.zend.com/php5/articles/php5-perl.php), and PHP in Perl (http://search.cpan.org/~karasik/PHP-0.09/PHP.pm http://search.cpan.org/~gschloss/PHP-Interpreter-1.0/lib/PHP/Interpreter.pm), and Perl can read some PHP directly (http://search.cpan.org/~esummers/PHP-Include-0.2/lib/PHP/Include.pm).
The original machine where my site was hosted was not a very powerful machine so the collage creation took ages.
So I decided to use a client-server model where I could run the backend on multiple machines and have each of them process a small portion of the requests the system got. That's why there's a DB involved: so that I can keep track of who's working on what query, and the backend can run on my home machine or a different, more powerful system.
Right now I am running just one backend process but once I get most of the bugs worked out I will prob put them on other systems I have. (Just to decrease the wait time..)
Thanks for the links, though; they will be useful in other programs I am thinking about.
Now, my first problem is that I am using the following function to download the images to the local system for processing, and I am not comfortable with it:
sub download_images
{
    my $url = shift;
    $url =~ s/\"/\%22/g;
    $url =~ s/\&/\%26/g;
    $url =~ s/\'/\%27/g;
    $url =~ s/\(/\%28/g;
    $url =~ s/\)/\%29/g;
    $url =~ s/\*/\%2A/g;
    $url =~ s/\+/\%2B/g;
    $url =~ s/\;/\%3B/g;
    $url =~ s/\[/\%5B/g;
    $url =~ s/\]/\%5D/g;
    $url =~ s/\`/\%60/g;
    $url =~ s/\{/\%7B/g;
    $url =~ s/\}/\%7D/g;
    $url =~ s/\|/\%7c/g;
    # print "Getting " . $url . "\n";
    `wget -T 1 -t 1 -q $url`;
}
Is there a way I can download the images quickly to my computer without having to use wget? I download up to 10 images each time for creating a collage. I don't like passing results I get from the net directly to a shell, but this is the only way I could get it to work. Another disadvantage of wget is that if it can't download an image, it takes forever to time out and go to the next URL in the list.
[Ben] Take a look at the LWP toolkit at http://cpan.org ; it contains support for any kind of HTTP/FTP/NNTP/etc. usage you might want from within Perl. The above can be done this way:
use LWP::UserAgent;
use HTTP::Request;

# Create user agent
my $u = LWP::UserAgent -> new;

# Configure the UA however you want - e.g., a timeout (note that the
# timeout is a UserAgent setting, not a request setting):
$u -> timeout( 10 );

# Create request
my $r = HTTP::Request -> new( GET => "$url" );

# Pass request to UA
my $ret = $u -> request( $r );
print "Error fetching $url" if $ret -> is_error();
There are much simpler ways to do it - i.e.,
perl -MLWP::Simple -we 'mirror "http://foo.bar.com"'
does the whole thing in one shot - but it's not nearly as flexible as the above approach, which allows tweaking any part of the interaction.
Thanks for the info. I will check out this package. It looks like it does what I want. How is this package speed wise/resource usage wise?
[Ben] Forgot to mention: this is untested code, just off the top of my head - but stands a reasonably good chance of working. See 'perldoc LWP::UserAgent' and 'perldoc HTTP::Request' for the exact public interface/usage info.
Ha ha, don't worry, I had guessed that this was the case. After all, I can't expect you to do all the work for me... I will try out the code and let you know how it went.
The second problem is that my mysql connection seems to drop at random times during execution. What can I do to prevent the mysql server from going away?
[Ben] 1) Stop shelling out. If in doubt, read "perldoc perlsec" (Perl security considerations) - and then stop shelling out. This includes command substitution (backticks) as well as the 'system' call.
2) In any interaction involving file system calls the timing of which could affect the Perl functions, force the buffers to autoflush by setting the '$|' variable to non-zero. Oh, yeah - and stop shelling out.
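A tiny illustration of the point about shelling out - here echo stands in for wget, and the "URL" is made up; a ';' smuggled into the data runs a second command:

```shell
# Untrusted data interpolated into a shell command becomes code.
url='http://example.com/pic.jpg;echo INJECTED'
sh -c "echo fetching $url"
```

The second line of output ("INJECTED") never appeared in the script - it came from the data. With a real wget and a hostile URL, that could be any command at all.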
Below is the code I use in wrapper.pl to check the DB for changes:
See attached wrapper.pl.txt
The script usually dies around the last $update->execute. What I think might be happening is that collage.pl is taking too long to run and the DB connection times out. Is that possible? Can I force the connection not to time out? (I did try searching on Google but didn't find any way of changing the keep-connection-alive variable from a script.)
Any ideas/suggestions? Thanks in advance for the help.
PS: Any suggestions on improving the script would be most welcome as I am using this to try to learn perl.
I'm trying to get rsync access to an OS X server with a paranoid sysadmin who doesn't know much about Unix programs. (He's a GUI kind of guy.) He's offered me FTP access to one directory, but I'd really like to use rsync due to its low-bandwidth nature and auto-delete feature (delete any file at the destination that's been deleted at the source). His main desire is not to grant a general-purpose account on the server, so if I can convince him that rsync+ssh can be configured to grant access only for rsync in that directory, I may have a chance. But since they're two separate programs (as opposed to *ftpd and mysqld, which can have private password lists for only their program), I'm not sure how to enforce that. Would I have to use rsyncd alone, which I guess means no encryption? (Granted, ftp has no encryption either, but I think he's just using that due to lack of knowledge of alternatives.)
(And when is ssync going to arrive, to avoid this dual-program problem?)
[Benjamin] Take a look at rssh (http://www.pizzashack.org/rssh/index.shtml) or scponly (http://sublimation.org/scponly) - both can be used together with ssh to restrict access to just rsync.
However, access to a single directory would probably require a user jail -- all is explained in the rssh and scponly docs, but it's not really for your "GUI" types.
[Kapil] I suppose you mean something that combines ssh and rsync. In any case, your particular problem might be solved by means of an authorized_keys file entry that looks like this (it is all on one line):
from="202.41.95.13",command="rsync -aCz --server --sender $SRCDIR .", no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-dss AAAAB3NzaC1kc3MAAACBAMzBqJtkx52y3IL9g/B0zAna3WVP6fXDO+YZSrb8GsZ2Novx+1sa X/wBYrIXiKlm0LayvJpz7cF17ycWEyzANF9EivYbwMXPQWecxao82999SjKiM7nX2BUmoePN iUkqpnZuilBS2fPadkTZ7CSGevJ2Y9ryb1LOkFWkdBe2c4ETAAAAFQCUk+rB5HRYj0+KIn5H fiOF0dQtvwAAAIA7ezUaP6CpZ45FOJLunipNdkC0oWj3C5WgguwiAVEz3/I5ALAQ9hdmy+7+ Bi0hUGdkTcRoe4UPMgsahdXLZRNettMv+zdJiQqIiEnmzlQxlNY2LBlrfQyRwVU1SW3QWGog tssIoeOp9GRx7N5H2ZAzMoyGBaUDfHVQueI2BeJiGwAAAIEAhV0XWIm8c2hLAGRyYZeqi5G3 yDAB0DZPyiuYyfnBav3nqjkO65faDuajKUv6as90CGYVlixWO+tjOL+/D9c0DoqdMllPEiZw aGBxd96o1pVrsFw/Ff0jJOtxzj+/Tzgaw9AdI0Usgw1cfXwWS1kJlhXqR00O/ow/XATejWpW 0i8= kapil@neem
Here you must put the appropriate source directory in $SRCDIR.
The authorized key file can be put in a dummy users directory. This dummy user should have appropriate read/write permissions for the directory in question.
As an alternative you can use a configuration file "--config=$FILE" in place of $SRCDIR.
Once this is done, the owner of the SSH private key associated with the public-key (which is the bit that starts ssh-dss AAA....) can connect to the ssh server and start the above command and only the above command.
Hi there,
I'm not a TAG subscriber, so I can't see the list archives to verify, but hopefully this mail isn't repeating something that you've already had a dozen times this month.
[Thomas] So far, you're the first.
From September's gazette: "my machine only boots from floppy, and I want it to boot from cd" might be addressed with a smart boot manager, such as sbm. The debian (sarge) one credits James Su and Lonius as authors, and says it was downloaded from http://www.gnuchina.org/~suzhe , but it looks like the useful content can now be found at http://btmgr.sourceforge.net
[Thomas] Indeed. It has been mentioned in the LG in the past (twice by me, and once by Ben, I believe.)
[Ben] Wasn't me; I hadn't run across SBM until now.
[Thomas] It's OK, and provides a lot of elaborate features that can be quite interesting on certain types of hardware, it has to be said.
[Ben] As is often the case, Debian already has it as a package (pretty amazing folks, those Debian maintainers!) -
ben@Fenrir:~$ apt-cache search smart boot
bmconf - The installer and configurator of the Smart Boot Manager
sbm - Smart Boot Manager (SBM) is a full-featured boot manager
As Francis has already mentioned, though, it won't boot USB devices. Too bad; that would make it quite useful, especially given that modern kernels are too big to fit on a floppy anymore.
By the way - the fact that they are too big annoys the hell out of me. There are plenty of folks out there who need floppy-based booting - troubleshooting and booting weird hardware configurations are two situations where that capability can be critical - and "new systems all come with a CD-ROM" is NOT equivalent to "all existing systems have a CD-ROM". Yeah, older kernels, whatever; as time goes on, those become less and less useful - and support less and less common hardware. I'll admit that I'm coming from ignorance here, but - there should have been a way to make the kernel modular enough to provide the "compile small kernel" option instead of just losing this important capability.
Thanks for the reply. Oops -- I hadn't spotted that. I did try searching for "sbm", and all I found was a (presumably) mis-spelled samba config file. But now that I try again, searching for "smart boot manager", I see that it does appear in the archives.
No harm done.
"sbminst" it to a floppy to confirm that it can use your hardware, then consider putting it in your primary disk mbr, consigning lilo or other boot loader to a partition or secondary disk. Of course this last bit presumes that "my machine only boots from floppy" really means "my machine only boots from floppy or one hard disk", but that's probably a reasonable assumption.
Worked for me with an ATAPI cd drive that the BIOS didn't like. I suspect it won't work with the SCSI cd in the original problem, sadly. And am almost certain that it also won't work with the USB stick in the original original problem. So it isn't a full solution -- or even a useful solution in these specific cases -- but it might help someone with a slightly different problem.
Hi there,
I'm wondering if it's wise to allow a remote user within the LAN to log in as root, by adding that user's public key to root's "authorized_keys" for that machine.
[Kapil] There is an "sudo"-like mechanism within SSH for doing this. In the authorized_keys file you put a "command=...." entry which ensures that this key can only be used to run that specific command.
All the usual warnings a la "sudo" apply regarding what commands should be allowed. It is generally a good idea to also prevent the agent forwarding, X11 forwarding and pty allocation.
Here is an entry that I use for "rsync" access. (I have wrapped the line and AAAA.... is the ssh key which has been truncated).
from="172.16.1.28",command="rsync -aCz --server --sender . .", no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-dss AAAA..... rsyncuser
I'm writing some scripts to back up data on our small business network here. One option is to get each machine to periodically dump its data on a specific machine using NFS. The option I'm interested in is to get a designated machine to remotely login to each machine and transfer the files over a tar-ssh pipe.
The only reason to be using root access is because some directories (/root, some in /var/lib) can only be read by root. Would changing permissions (e.g. /var/lib/rpm) affect anything, if I chgrp the directories to a "backup" usergroup?
I'm concerned with one machine, a web server, that will be included in the backup scheme. All machines here use Class A private network addresses and are behind a NAT firewall, but the web server can be accessed from the Internet. Will allowing root login over ssh on that machine pose a huge security risk, even by allowing ssh traffic from only the local network?
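(If root login over SSH can't be avoided entirely, sshd itself can narrow the exposure. A hypothetical sshd_config fragment - the directives are real OpenSSH options, but the values here are only illustrative for a 10.x.x.x LAN:)

```
# /etc/ssh/sshd_config (fragment)
PermitRootLogin forced-commands-only   # root keys work only via a command="..." entry
AllowUsers root@10.*                   # accept root logins from the LAN only
```

Combined with the command="..." entries described above, this limits a stolen key to running the one backup command, and only from inside the network.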
By Jim Dennis, Jason Creighton, Chris G, Karl-Heinz, and... (meet the Gang) ... the Editors of Linux Gazette... and You!
From Adam Engel
Answered By: Ben Okopnik, Jimmy O'Regan
Hey Gang,
Does anyone know of a script or program that can convert a bitmap image to binary code and print the line(s) of ones and zeros to standard output?
[Ben] Adam, if I didn't know you, my "homework detector" sense would be tingling. As it is, I'm still a bit boggled as to why you'd want to do such a thing, but - all the standard tools that I can think of, off the top of my head, do octal and hex. Binary, well... heck, I'd do what I always do when it takes me more than a few seconds to think of an answer to that kind of question: reach for Perl.
perl -wne'printf "%08b ", ord $_ for split //' foobar.bmp
That'll give you a line of space-separated, 8-bit binary numbers representing the ASCII value of each character in the file. Just to play with the idea, converting it back wouldn't be any harder:
perl -wne'print chr eval "0b$_" for split' foobar.binary
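For completeness: the standard xxd utility (shipped with vim) can produce the same sort of bit dump, assuming it's installed:

```shell
# xxd -b prints each byte of its input as 8 binary digits.
printf 'AB' | xxd -b
```

The first column is the offset, then the bits (01000001 is 'A'), then the printable characters.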
[Jimmy] Sounds like pbm (http://netpbm.sourceforge.net/doc/pbm.html), minus the header:
P1
# feep.pbm
24 7
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 1 1 1 1 0 0 1 1 1 1 0 0 1 1 1 1 0 0 1 1 1 1 0
0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 1 0
0 1 1 1 0 0 0 1 1 1 0 0 0 1 1 1 0 0 0 1 1 1 1 0
0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0
0 1 0 0 0 0 0 1 1 1 1 0 0 1 1 1 1 0 0 1 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
The netpbm tools are a standard part of Linux distributions everywhere. ImageMagick and The Gimp are able to write it too.
[Ben] I just tried converting some text to PBM format (using 'convert' from ImageMagick), and it's not doing anything like the above; in fact, it created a file that's mostly full of nulls - which displays a white page with the word "hello" in the center when looked at with an image viewer.
Looking at a 1k chunk at the top of it, I get the following ("^@" is how 'less' displays a null):
ben at Fenrir:~$ printf "hello"| convert text:- pbm:-|head -c 1k|less
P4
612 792
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
[...the rest of the kilobyte is more of the same - all nulls...]
[Jimmy] Whoops. I meant the 'Plain PBM' format, which you get using the pnmtoplainpbm program.
[Ben] [grin] One more time, with gusto...
I think you mean "pnmtoplainpnm" (I checked the Debian file list as well as doing a Google search - which found nothing but rebuked me with "Did you mean: pnmtoplainpnm"; there's no 'pnmtoplainpbm' in sight.) That, however, doesn't seem to do it either:
ben at Fenrir:~/Pics$ pnmtoplainpnm smile.pnm | head -1k| less
P3
15 15
255
191 191 191 191 191 191 191 191 191 191 191 191 191 191 191 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 191 191 191 191 191 191 191 191 191 191 191 191 191
191 191 191 191 191 191 191 191 191 191 191 191 0 0 0 0 0 0 255 255 0
255 255 0 255 255 0 255 255 0 255 255 0 0 0 0 0 0 0 191 191 191 191 191
191 191 191 191 191 191 191 191 191 191 0 0 0 255 255 0 255 255 0 255
255 0 255 255 0 255 255 0 255 255 0 255 255 0 255 255 0 255 255 0 0 0 0
191 191 191 191 191 191 191 191 191 0 0 0 255 255 0 255 255 0 255 255 0
255 255 0 255 255 0 255 255 0 255 255 0 255 255 0 255 255 0 255 255 0
255 255 0 0 0 0 191 191 191 191 191 191 0 0 0 255 255 0 255 255 0 0 0 0
0 0 0 255 255 0 255 255 0 255 255 0 0 0 0 0 0 0 255 255 0 255 255 0 0 0
0 191 191 191 0 0 0 255 255 0 255 255 0 255 255 0 0 0 0 0 0 0 255 255 0
255 255 0 255 255 0 0 0 0 0 0 0 255 255 0 255 255 0 255 255 0 0 0 0 0 0
0 255 255 0 255 255 0 255 255 0 255 255 0 255 255 0 255 255 0 255 255 0
255 255
According to the NetPBM project page/"pbm" man page, "Plain PBM" is created by the "pnmtoplainpnm" utility - although they do state that it only works with monochrome images.
Ah-HA! I've got it - sorta. "pnmtoplainpnm" takes either a PNM or a PBM file, and applies a /reductio ad absurdum/ algorithm to produce the "simplest" version of the input format. This does indeed do... something similar to what the manpage described:
ben at Fenrir:~/Pics$ printf "foo"|convert -crop 25x15+40+45 text:- pbm:-|pnmtoplainpnm
P1
25 15
0000000000000000000000000
0000000000000000000000000
0000110000000000000000000
0000100000000000000000000
0001110111100011110000000
0000101100110110011000000
0000101000010100001000000
0000101000010100001000000
0000101100110110011000000
0000100111100011110000000
0000000000000000000000000
0000000000000000000000000
0000000000000000000000000
0000000000000000000000000
0000000000000000000000000
I don't know that it actually matches the question that was asked, though. :)
[Raj] Not sure if it might help you, but you can have a look at aalib ASCII-art library http://aa-project.sourceforge.net/aalib . It converts jpegs to ascii.
Or do you want to see the raw binary as it is stored in the bmp file ?
[Ben] Well, sorta. Very sorta. Unless you're a C programmer with time enough to write a converter that uses aa-lib. I've always thought that their demo ('bb') was fantastic, but... it's not something that ever really caught on, and I'm a sort of a Luddite in Linux clothing, so my opinion doesn't count.
What you might be thinking about is "aview/asciiview" - which is what comes closest to "converting" JPG to ASCII; the latter displays JPGs, as well as any other format recognized by the NetPBM kit, in an extremely rough ASCII approximation. However, "aview", which can only read PNM-formatted images, is capable of surprising image quality on a terminal with a tiny font:
xterm -fn 5x7 -geometry 1000x1000
# Enter this command in the new xterm
convert logo: pnm:-|aview -driver slang
All good fun, but - still doesn't answer the question as posed. Unless I've totally misread it.
Actually, I just recently downloaded NetPBM for another application for JPEG conversion. As far as the "GUI" tools like ImageMagick and The Gimp go, being a gui-phobe and graphically challenged to boot, I rarely use them except to demonstrate that there is "another way" to people who have thrown away several hundred dollars a year on Adobe Photoshop. There is no "practical" use for my question. It came out of my recent study of "sed" and the movie "Charlie and the Chocolate Factory" (in which Johnny Depp transforms a giant chocolate bar into a "TV"-sized package with an imaginary contraption). The "real" exorcize,
[Jimmy] Come come. It doesn't sound particularly difficult, let alone demonic.
in Chapter Six of O'Reilly's "Sed & Awk" concerns a bit-mapped file that is represented in binary format. Beyond the scope of the exorcize/sed script, I've been trying to "translate" bitmap images to binary code and vice versa to see if -- I know, I know, this is STUPID, but I never pretended to be a Guru, just a GoTTO -- one can send images as text-files to be "rebuilt" upon receipt so people with low-bandwidth connections wouldn't have to wait forever to download images.
[Jimmy] Erm... I think you have things backwards here. Getting images in a binary format takes less bandwidth than getting the same thing encoded as text.
I chose "binary" merely because that was the code used in the exorcise. My excuse for tardiness handing in my homework is part medical -- had to spend a few days off-line in every sense of the word, and part rational: I finally asked myself, "wouldn't people have thought of this before?
[Jimmy] Yep. uuencode, base64, etc. etc.
If it makes you feel any better, it is possible to use a text representation of binary data with the data: URI (http://www.ietf.org/rfc/rfc2397), which Mozilla and Opera support.
Here's a simple example:

data:text/plain;charset=utf8,Ksi%C4%85%C5%BCka%20kucharska

That just gives you a simple text page with some Polish text ("Książka kucharska" - "Cookbook").
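Such a URI can also be built mechanically with base64 - the payload here is an arbitrary string:

```shell
# Wrap a small text payload in a base64-flavoured data: URI.
printf 'hello' | base64 | sed 's|^|data:text/plain;base64,|'
# -> data:text/plain;base64,aGVsbG8=
```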
Here's a bigger example:
See attached data-test.html
Is that what you were thinking about?
That was it exactly. All that hullaballoo for "Dan and James' going-away present." But thanks. After ten years of sending images back and forth, it just occurred to me that I had no idea what was under the hood. Hence, it made "sense" that plain text would be "lighter" than gifs, jpegs, xml images etc. Thanks again for the explanation and the demo.
[Jimmy] Heh. I already had that example made up: it made sense to the original recipient :)
Hence, it must be doable and I merely can't figure out how to do it, or it's doable but not worth the time/effort and was a silly idea in the first place." On the other hand, if the numbers can be deconstructed in Perl, and reconstructed, why not? They laughed at Willy Wonka, and look at him NOW.
From Rick Moen
Answered By: Jay R. Ashworth, Ben Okopnik, Sindi Keesan
I've been paying a little closer attention to SMTP errors, since the migration of LG's public mailing lists. Here's one for TAG subscriber Sindi Keesan. (I just received one each of these following Deepak and Ben's posts to the "TFTP problem" thread, and undoubtedly will get another for this one.)
From MAILER-DAEMON Thu Jun 02 11:19:23 2005
From: Mail Delivery System <Mailer-Daemon@linuxmafia.com>
To: tag-bounces@lists.linuxgazette.net
Subject: Mail delivery failed: returning message to sender
Date: Thu, 02 Jun 2005 11:19:22 -0700

This message was created automatically by mail delivery software.

A message that you sent could not be delivered to one or more of its
recipients. This is a permanent error. The following address(es) failed:

  keesan@freeshell.org
    Unrouteable address

[goes on to provide a copy of the undeliverable list post]
A lot of us have become accustomed to calling these "bounces" and disregarding them because they're so often cryptic and impenetrable. (My SMTP server, in general, gives pretty clear diagnostic messages, and yet this one was obscure to me, too.) Sometimes, the pedants among us distinguish Delivery Status Notifications (DSNs) from "bounces", where the former are three-digit SMTP-standard error codes and matching explanatory text, generated by the remote SMTP host (MTA process) during an SMTP conversation.
In this case, there's none of that "550 User unknown" or similar DSN stuff, and I was left curious what "Unrouteable address" means, here -- especially since Ben and others have a pretty high opinion of Stephen M. Jones's "SDF Public Access UNIX System, INC." operation at freeshell.{org|net}.
I was intending to attempt a manual SMTP session with that system (by telnetting into its mail exchanger (MX)). The first step, then, is to ask the public DNS where freeshell.org's MXes are:
[rick@linuxmafia] ~ $ dig -t mx freeshell.org +short
; <<>> DiG 9.2.4 <<>> -t mx freeshell.org +short
;; global options: printcmd
;; connection timed out; no servers could be reached
[rick@linuxmafia]
Hmm. Can that be right? No nameservers can be reached that are authoritative for the domain? First, let's cross-check to make sure I'm getting meaningful results for similar queries on other domains (i.e., that I don't just have network or DNS-access problems of my own):
[rick@linuxmafia] ~ $ dig -t mx apple.com +short
30 eg-mail-in1.apple.com.
10 mail-in3.apple.com.
10 mail-in4.apple.com.
10 mail-in5.apple.com.
[rick@linuxmafia] ~ $ dig -t mx linuxmafia.com +short
10 linuxmafia.com.
[rick@linuxmafia] ~ $
Yep, that's all looking good. Let's see what IPs are listed as freeshell.org's authoritative nameservers in the whois servers:
See attached whois-output.txt
Er, I might be missing something, but having all of one's nameservers be in-domain seems like a bit of a hazard. Sure, the top-level nameservers will also have their IPs as part of the DNS's "glue records", but the rest of us won't. And having only two nameservers is a bit thin.
[Ben] Indeed, it is a hazard; the few times that SDF has gone down, it was like being shifted sideways into an alternate universe in which it had never existed. The response from web browser, fetchmail, etc. amounted to "Freeshell? What's a Freeshell? Go away, you silly man - we have no time for psychotics with an overactive imagination."
As I've found out while researching my response to Jay, I think we've now found part of the reason for Stephen's problem: His third nameserver is getting ignored (not used), because of obsolete glue records in his parent zone. He needs to fix that.
It would be nice if Sindi Keesan or someone else whose domain name doesn't have "linux" in it would advise Stephen of that, and gently lead him by the hand to the www.dnsreport.com test CGI -- as that gives a nice overview of his problems (and, basically, a checklist).
[Sindi] I would be happy to send along a message to him from my address here if you tell me exactly what to say and where to address it to.
This reminds me of when someone knowledgeable at my local bbs figured out why the electric company's online billing site went in little circles, but they did not want to hear about it. (Nor were they interested in the fact that Spamassassin was dumping their enormous emailed bills for five different reasons, including green fonts, too many images, odd looking subject line or from, too much HTML, and 'porn', and the mails were too large to receive at my address without Spamassassin).
OK, try this:
According to the report on
http://www.dnsreport.com/tools/dnsreport.ch?domain=freeshell.org ,
your third nameserver (ns-c.freeshell.org) isn't in the authoritative
list in the .org records, even if you have it in the zonefile. Because
of that, it probably won't get DNS queries about freeshell.org, which
may partially explain the outage we had recently.
Also (as mentioned in that report), "freeshell.org." in your zonefile's SOA record is wrong, and probably should be "ns-a.freeshell.org."
That report also makes some sensible-sounding suggestions about timeouts to tweak in the SOA record, which you might consider.
[Sindi] Who do I send this to? I am not very familiar with sdf, just use it for email and website.
Re-checking my first post to this thread:
See attached whois-output.txt
The indicated e-mail address appears to be that of Stephen M. Jones, proprietor.
Suggestion: "whois" is your friend.
[Sindi] I sent it to the address below with a short preface stating that a 'friend' suggested I pass along this information. Thanks. This situation reminds me of that of a friend whose daughter will not accept email from him if it is properly spelled - she knows he is dyslexic and insists that he write it himself, so when I write it for him we have to send it from his address not mine, and make sure to introduce some spelling errors. (And then her 8 year old mysteriously sends perfectly spelled emails 'all by himself'.)
[Jay] Two nameservers is indeed a bit thin... but on the other point, unless my understanding of DNS is also thin, the parent nameserver is always going to hand you the glue, is it not?
It's going to hand you the glue records if it has them. One of the reasons I like the "DNS Report" test at http://www.dnsreport.com is that it shows you, by implication, the immense variety of ways to screw up one's DNS -- and one of them is to have missing or incorrect glue records in the parent zone. Recommended facility, anyway.
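The core of that particular check is just comparing the NS list the parent hands out against the NS list in the zone itself. A minimal sketch -- the two lists are hard-coded here to mirror the freeshell.org case, but in real use you'd fill them from `dig +norecurse @<a .org server> freeshell.org NS` and from a query against one of the zone's own servers:

```shell
# Hypothetical data mirroring the freeshell.org situation: the zone
# lists three nameservers, but the parent delegation lists only two.
parent_ns="ns-a.freeshell.org ns-b.freeshell.org"
zone_ns="ns-a.freeshell.org ns-b.freeshell.org ns-c.freeshell.org"

# Flag any nameserver present in the zone but absent from the parent.
for ns in $zone_ns; do
  case " $parent_ns " in
    *" $ns "*) ;;  # delegated at the parent -- fine
    *) echo "WARNING: $ns is in the zone but not delegated by the parent" ;;
  esac
done
```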
[Jay] And in return, nice tip. :-)
Very cool site. I wonder if he has a version that returns something more easily parseable, by, say, Nagios. Or, alternatively, will make his script available. Must look closer.
In fact, if you look closely at http://www.dnsreport.com/tools/dnsreport.ch?domain=freeshell.org , it is evident that Stephen M. Jones did at some point deploy a third nameserver, but that it's missing from the parent records, which explains some of his fragility problems. (He also has some minor SOA errors.)
[Jay] I was a touch surprised, though, that you didn't demonstrate the handy-dandy "+trace" option to dig, which I fell in love with the minute I found it:
Neat. Here's the result for linuxgazette.net:
See attached dig-trace-output.txt
I haven't done nearly enough playing around with new DNS tools: I'm one of those codgers who've been hanging onto nslookup and sulking about its ongoing demise. Thank you for pointing out that trick!
[Jay] I'm trying to figure out a reasonable way to automate running it and looking for changes; it's not quite tuned for that. Perhaps the dnsreport code would be easier to use that way.
[Jay] It automatically traces the domain down from the root, showing you the salient information at each step of the way; the important bit in this case was:
> freeshell.net.          172800  IN      NS      ns-a.freeshell.org.
> freeshell.net.          172800  IN      NS      ns-b.freeshell.org.
> ;; Received 82 bytes from 192.12.94.30#53(E.GTLD-SERVERS.net) in 81 ms
>
> ;; reply from unexpected source: 65.32.1.80#53, expected 192.94.73.20#53
> ;; Warning: ID mismatch: expected ID 31090, got 36862
> ;; reply from unexpected source: 65.32.1.80#53, expected 192.94.73.20#53
> ;; Warning: ID mismatch: expected ID 31090, got 36862
> ;; Received 31 bytes from 192.67.63.37#53(ns-b.freeshell.org) in 45 ms
Note that a) it thinks those servers are in freeshell.org, not .net, and b) that it appears that neither of them are answering the phone.
You can see that it did get an answer, though I'm a touch irked at dig that it didn't tell us what that answer was. The 65.32 servers are the customer resolver servers for Road Runner TampaBay, which is my uplink; why it saw fit to answer for itself I don't know -- clearly, since I did this from a Linux box, it shouldn't even have been asked...
But it appears still not to be running; perhaps the gent gave up?
Looks like he had a one-day outage, and is back.
[Jay] Well, good. We don't need that sort of service much anymore, but those that need it... need it.
Here's the list from my own domain:
Domain servers in listed order:
   NS1.LINUXMAFIA.COM        198.144.195.186
   NS.PRIMATE.NET            198.144.194.12
   NS1.VASOFTWARE.COM        12.152.184.135
   NS.ON.PRIMATE.NET         207.44.185.143
   NS1.THECOOP.NET           216.218.255.165
For some reason, my Tucows / OpenSRS registration lists the authoritative nameservers' IP addresses in the public DNS, while Stephen M. Jones's doesn't. I'm not clear on why this is.
Anyhow, that's at least something to go on. Let's find out what IP addresses the authoritative nameservers have:
[rick@linuxmafia] ~ $ host NS-A.FREESHELL.ORG
NS-A.FREESHELL.ORG has address 192.94.73.20
[rick@linuxmafia] ~ $ host NS-B.FREESHELL.ORG
NS-B.FREESHELL.ORG has address 192.67.63.37
[rick@linuxmafia]
Well, at least that much of his DNS is working.
[Ben] Wouldn't that be your DNS that's working? Unless I'm mistaken, "host" uses your /etc/resolv.conf to look up hosts - unless you specify another DNS server explicitly.
Well, DNS being the distributed system that it is, you're always using some client piece and some server piece. But what I meant is that at least that much of his DNS information is working (accessible and useful). It might very well have been cached in my or some other non-authoritative nameserver's records, yes. But I was wanting to fetch his authoritative nameservers' IPs from somewhere -- anywhere -- so that I could ask them questions directly, as the next step. Nothing like getting DNS answers straight from the horse's mouth, if you don't mind the rather unsanitary metaphor.
Let's ask the nameservers explicitly by their IP addresses, to make double-sure the query's going to the right place:
~ $ dig -t mx freeshell.org @192.94.73.20 +short

; <<>> DiG 9.2.4 <<>> -t mx freeshell.org @192.94.73.20 +short
;; global options:  printcmd
;; connection timed out; no servers could be reached
[rick@linuxmafia] ~ $ dig -t mx freeshell.org @192.67.63.37 +short
[rick@linuxmafia] ~ $
How odd. Looks to me like the first nameserver doesn't respond, and the second returns some sort of null result. Just out of old-fogydom, and as a cross-check on "dig", let's do the same query using nslookup (a tool that's now deprecated, in general):
[rick@linuxmafia] ~ $ nslookup -query=mx freeshell.org 192.94.73.20
;; connection timed out; no servers could be reached
[rick@linuxmafia] ~ $ nslookup -query=mx freeshell.org 192.67.63.37
Server:         192.67.63.37
Address:        192.67.63.37#53

** server can't find freeshell.org: SERVFAIL
[rick@linuxmafia] ~ $
[Ben] I believe that "host" is the recommended replacement for "nslookup" these days; I groused a bit about having to learn its syntax, but it's quite nice once you do. It's a sort of a cross between "dig" and "nslookup":
host -t mx freeshell.org 192.67.63.37
freeshell.org MX 50 smtp.freeshell.org
Ah, it appears that I've underestimated the thing. Thanks.
In DNS lingo, SERVFAIL means that the domain does exist and that the root name servers have information on it, but that its authoritative name servers are not answering queries about it. So, basically Stephen M. Jones has one of his two nameservers offline, and the other misconfigured to the point that it stutters and faints when you ask it questions.
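In other words, the response code tells you which failure mode you're looking at. A toy classifier -- the status strings follow dig's conventions, but the wording of the diagnoses is mine:

```shell
# Map a DNS response code to the plain-English diagnosis used above.
classify_dns() {
  case "$1" in
    NOERROR)  echo "server answered normally" ;;
    SERVFAIL) echo "domain is delegated, but its authoritative server failed to answer" ;;
    NXDOMAIN) echo "no such domain" ;;
    *)        echo "other: $1" ;;
  esac
}

classify_dns SERVFAIL
```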
I hope it's a temporary glitch, but this (among other things) points out why continuity of DNS service is so important, and why two nameservers really aren't quite enough.
http://www.dnsreport.com/tools/dnsreport.ch?domain=freeshell.org is also interesting, giving an overview of just how much is broken here (a lot). (The freeshell.net variant of the domain has the same problem for the same reasons.)
Ben mentioned on a private mailing list that Stephen's a good guy and performs a generous service to the public but for reasons of personal experience loathes Linux. I vaguely remembered when that came about, and have re-found the rant he posted at the time, which still makes interesting reading:
http://web.archive.org/web/20010712145226/http://www.lonestar.org/sdf/
[Ben] Yeah, that was what I'd based my statement on; that, and the dance of boundless joy that he performed on the Freeshell list after the move was done (it's archived there, but it's not web-accessible AFAIK.)
I'll drop Stephen a brief, polite heads-up e-mail. I hope he won't mind my address. ;->
Stephen goes into a little more detail about his circa-2001 disenchantment with Linux, here:
http://mail-index.netbsd.org/port-alpha/2001/12/28/0008.html
Oh, what the heck, I might as well just quote it, because it's still relevant:
From rick@linuxmafia.com  Sun, 15 Jul 2001 13:59:04 -0700
Date: Sun, 15 Jul 2001 13:59:04 -0700
From: Rick Moen <rick@linuxmafia.com>
Subject: [CrackMonkey] How come I am just hearing about this?
begin Bob Bernstein quotation:
> Found on the netbsd-advocacy list:
> http://www.lonestar.org/sdf/
It's got to really suck, being sysadmin of a public-access Unix system: You'll have an ungodly number of careless users, plus you have to worry about attacks from arbitrary remote locations, both by your users and by outsiders masquerading as legitimate users. When the day comes that you suddenly realise your site has for some time been massively compromised, more often than not you have only surmises about how entry and compromise occurred.
A number of Stephen Jones's statements suggest that such was the case with freeshell.org (aka "SDF"):
> I'm even thinking of just removing telnet/ftp/pop3 all together...
Plaintext-authentication network access to shell accounts: check.
> ...we might as well had our passwords in plain text as LINUX's use of
> encryption is about twice as good as Microsoft's.
Unless I misremember very badly how the login process works, passwords are not processed in kernelspace. Some Unixes introduce a PAM layer, while others do not. Some support MD5; others do not. But the irony is that Stephen's users did have their passwords in plaintext -- every time they did telnet/ftp/pop3!
In fact, it's obvious that Stephen's brand new to security-mindedness:
> I've never felt security was important because I sincerely thought
> that a public system would be sacred ground to anyone be they a
> cracker or just normal user.
Poor bastard. I'd say he's had a rude awakening, except that I think it's not even begun, yet.
His July 11 note suggests that he was still running Linux kernel 2.2.18, which has been security-obsolete for a dog's age. (Note exception: Some distributions provide kernels with nominally earlier version numbers that have been patched to have the fixes introduced in nominally later kernels.)
Stephen writes:

> I blame LINUX due to the recent unveiling of the "oh, if root wasn't
                           ^^^^^^
> already easy to get, here is an easy way" bug in the execve() system
> call...
But that ptrace/execve() race condition (note: a local-user exploit) was not recent at all: It was a long while back. Wojciech Purczynski reported it to BugTraq on 2001-03-27:
http://www.securityfocus.com/templates/archive.pike?list=1&mid=171708&_ref=539250975
> ...where malicious code an be executed via almost any binary.
                                             ^^^^^^^^^^^^^^^^^
As Purczynski says, any SUID binary. But the point is that all this is very old news, 3+ months old. And extremely well known (as the "ptrace exploit").
Now, it seems likely that Stephen was still running a non-ptrace-patched 2.2.18 (or earlier) Linux kernel when the shit hit the fan, proving if nothing else did that he was asleep at the wheel -- but it's also clear that his system was security-exposed in a multitude of other ways, AND that he still is. (Example: He hasn't yet firmly deep-sixed telnet, POP3, and ftp inbound access mechanisms exposing shell passwords to the Net. He will.)
Let's do a taxonomy of root-compromise attacks (as opposed to DoS attacks and other categories): Rarely, these might be compromises of daemon processes or kernel network stacks from remote -- e.g., against vulnerable releases of BIND v. 8.x, lpr, or wu-ftpd. If the attack is not one of those, it must involve acquiring user-level access first, and then attacking the host from inside, impersonating a legitimate user. (In other words, the compromise of root authority is either from outside the host, or inside. Inside is much easier.)
The latter category breaks down further, according to how the attacker arranges to impersonate a legitimate user, into sniffing versus other. Sniffed passwords are, of course, what you get with standard deployments of telnet, non-anonymous ftp, and POP3 daemons -- and are a particularly ignominious way to get compromised. Stephen is only now thinking of shutting off this possibility -- so I fear he has other hard lessons yet to come.
The other ways of compromising shell-account passwords all trace back to the fact that users are pretty much always the weak element. If you let them, they'll use the same weak password everywhere. If you assign passwords yourself, change them at intervals, and remove the SUID bit from /usr/bin/passwd -- and sternly admonish the users not to expose their passwords through re-use on other systems -- they'll still do the latter, because they can, and because you can't stop them. Switch to one-time pad authentication, and they'll store the pads or seeds on vulnerable systems. And of course they'll ssh in, thinking that's unconditionally secure -- from compromised systems where attackers are logging all keyboard activity. And, remember, it takes only one user's shell access getting compromised. (Or, of course, the attacker might just sign up for a user account.)
You might be able to prevent system security from being shot in the foot by your users by requiring them all to use physical security dongles (e.g., SecureID) plus one-time pads. Maybe. But not on a public-access Unix system. In that sense, Stephen is screwed.
How so? Because he's doomed to having attackers occasionally get user-level access -- and protecting root against local users is much more difficult. While the remote attacker can fruitfully attack only your running network daemons and network stacks, as a local user he can attack any security-sensitive binary on your system -- a much wider field of targets. Stephen can try to keep installed software current, remove some, remove SUID/SGID from others, recompile using StackGuard, implement a capabilities model / ACLs, keep selected subtrees on write-protected media, and so on.
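Enumerating the SUID/SGID binaries is the obvious first step in shrinking that field of targets. A sketch -- narrowed to /bin and /usr/bin here for brevity; in practice you'd sweep the whole filesystem with find / -xdev:

```shell
# List set-uid and set-gid programs -- the binaries a local user gets
# to attack for privilege escalation. Review the list, then
# chmod u-s / g-s anything your users don't actually need.
find /bin /usr/bin \( -perm -4000 -o -perm -2000 \) -type f 2>/dev/null
```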
And he'll still get clobbered, from time to time. Odds are, he won't even be aware of compromise for quite a long time (as he probably wasn't, this time). Does he have IDSes set up? Of course not! He's "still against security". But that will change. Papa Darwin is a good, if ungentle, teacher.
NetBSD 1.5.1 on Alpha is going to be an eminently suitable system for him (even though the Alpha is doomed over the longer term). And Nick used to deliberately keep an Alpha on-line with a very vulnerable, antique OS load, just for the amusement value of watching x86-oriented kiddies' canned attacks crash and burn on it.
But Stephen has a longer-term problem, and it has nothing to do with kernel vulnerabilities -- let alone old ones that he should have long ago patched.
[Sindi] Thanks for pointing out to me that this thread has something to do with why sdf was gone yesterday, but it is way beyond my ability to understand. Could you summarize in a few sentences what this all means, for a beginning linux user with no computer training since Fortran IV?
As The Doctor says, "Ah, that takes me back -- or is it forward? That's the problem with time travel; you can never tell." ;->
(I likewise cut my teeth on FORTRAN, in my case playing around on university mainframes.)
If my post seemed a little bit meandering, it was because I was chasing down several things:
The first question fascinated me because, of late, I've taken a particular interest in understanding SMTP-protocol mutterings -- and have improved my own machine's articulateness in that area -- and yet the quoted advisory (issued by my machine's MTA) was about as clear as mud.
So, the answer to question #1 turned out to be: "I, the lists.linuxgazette.net SMTP process, couldn't even attempt delivery at all, because the destination domain's DNS is completely non-functional."
That also furnishes the answer to question #2: There was no DSN conversation because my MTA couldn't even look up the destination IP, let alone talk to freeshell.org's SMTP host.
And answer #3 was: "It's unclear whether SDF's main services themselves went down, because SDF's nameservice outage made those unreachable by name, even if they were still running."
So, as I was saying, continuity of DNS service is really, really important to Internet services, and Stephen M. Jones's DNS for freeshell.org (as presently configured) has proven to be fragile. And thus your (recent) problem.
I sent a short, polite heads-up advisory to Jones, about his DNS outage. He didn't respond, but I gather from your post that he must have fixed it!
[Sindi] Thanks for your explanation, which I hope I understood.
I had Fortran in high school, back before our high school even owned a computer. We typed out our programs on yellow paper tape which was sent to the other high school to run. In college our physical chemistry professor decided to teach us some useful skills, so in addition to learning to solder together a crystal radio, we ran programs on punch cards also in Fortran. In grad school it was still punch cards (one audited course on Pascal). The rest is self taught.
I think time actually goes in circles, so that everything is new every once in a while.
From Rick Moen
Answered By: Raj Shekhar
I wrote:
> I was thinking TAG might be able to include or excerpt from this Usenet
> thread?
[Jimmy] Look here for the thread in question.
This is how big a newsgroup/newsreader fan I am: If you check the headers, you'll notice that I'm posting this with the "tin" newsreader to newsgroup "lg.tag".
[Jimmy] Quoting Rick's headers:
Newsgroups: lg.tag
Organization: If you lived here, you'd be $HOME already.
User-Agent: tin/1.7.6-20040906 ("Baleshare") (UNIX) (Linux/2.4.27-2-686 (i686))
Date: Tue, 24 May 2005 19:50:14 -0700
XRef: linuxmafia.com lg.tag:8
What's up with that, you might wonder? Well, Mailman has a feature, unused by most listadmins, to bidirectionally (or otherwise) gate any mailing list with an NNTP newsgroup. Once set up, any mailing list post shows up (across the gateway) within a few minutes, as an article in the relevant group's news spool -- and, conversely, any article posted to the newsgroup's spool likewise gets hustled the other direction across the gateway within a few minutes, and mailed out by Mailman to mailing list subscribers.
Please note the distinction: I did not say the mailing list goes out on Usenet. Usenet is a (very) large system of NNTP newsgroups, but many people also operate private newsgroups that have nothing to do with Usenet (other than also relying on the Network News Transfer Protocol).
In my case, I use leafnode's 2.x betas to support local newsgroups, which I don't offer to other news servers. (At this date, you have to compile 2.0 betas, as local groups are a prerelease feature not included in the 1.x release series.)
For example, my local LUG, CABAL, has a mailing list called "conspire" -- conspire@linuxmafia.com. At the time I set that up, I also compiled leafnode 2.0b7, configured it to run under inetd, and added this line to /etc/leafnode/local.groups:
cabal.conspire y Local newsgroup for the CABAL Linux user group.
Then, I enabled Mailman's gateway feature for conspire@linuxmafia.com, et voila.
Basically, all I needed to do, today, was enable the gateway function for tag@lists.linuxgazette.net, and add this second line to /etc/leafnode/local.groups:
lg.tag y Linux Gazette's The Answer Gang
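For anyone wanting to script the Mailman side rather than click through the web admin, the gateway settings can also be applied with Mailman 2.x's config_list tool. A sketch -- the list name is from my setup above, and the paths are a stock layout; check your own install:

```text
# /tmp/gateway.cfg -- Mailman 2.x Mail<->News gateway options
nntp_host = 'localhost'
linked_newsgroup = 'lg.tag'
gateway_to_news = 1
gateway_to_mail = 1

# Apply with:
#   /usr/lib/mailman/bin/config_list -i /tmp/gateway.cfg tag
```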
For a very long time, I had open posting to anyone who wished to connect to my machine using NNTP. That came to an end a few months ago on account of... guess what? NNTP spammers. So, at that point I locked down access using /etc/hosts.deny (using Wietse Venema's TCP Wrappers library): Anyone wanting remote NNTP access can still get it, but you have to send me your fixed IP address (if you have one) -- and I then add a permission line to /etc/hosts.allow.
What's so great about newsgroups, you ask?
Let's say you become interested in CABAL's mailing list / newsgroup today, May 24, 2005. You can, of course, join the mailing list -- allowing you to read and respond to new posts. (Older posts are browseable only via the Pipermail archive at http://linuxmafia.com/pipermail/conspire , and you can't easily respond to them.)
Or you can participate in it, in its newsgroup form. You have instant access to all posts ever made, all the way back to December 2000 -- and can continue the thread on any of them. You don't need to subscribe before you can post, or unsubscribe to stop being barraged with stuff.
And newsgroups settled the infamous private-versus-group reply issue properly, long ago. There are no dumbass flamewars over Reply-To, because "Followups" (group responses) are treated distinctly from mailed (private) responses.
The only drawbacks are (1) people being unfamiliar with newsreaders and how newsgroups work, (2) people confusing the general concept of "newsgroups" with whatever they've heard about Usenet, and (3) people stuck behind poorly designed firewalls that don't allow NNTP access (119/tcp) but do permit e-mail.
Fortunately, adding NNTP access to a Mailman list doesn't impair "regular" mailing list functions, but does allow a second avenue of access.
One little problem I haven't figured out how to fix: Mailman sends over to the news server only new mailing list posts, starting with the establishment of the gateway. So, lg.tag's news spool has only the five or six latest TAG posts in it, whereas cabal.conspire's has that entire mailing list's history.
Ben, if you feel like a challenge (since you have a shell account, here), Mailman keeps the mailing list's mbox at /var/lib/mailman/archives/private/tag.mbox/tag.mbox, which you'll be able to read/copy, even though you can't "ls" the directory it's in. (List members can fetch that file as http://lists.linuxgazette.net/mailman/private/tag.mbox/tag.mbox .) Leafnode has the corresponding mail spool inside /var/spool/news/lg/tag/ .
If you or anyone else can figure out how to get the rest of the mbox's contents into that news spool, I'd be grateful.
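Half of that job is just splitting the mbox into individual articles; the still-unsolved half is feeding each piece into leafnode's spool (or posting it via NNTP) with sane article numbering. A sketch of the splitting step in plain sh -- real tools like formail -s or Python's mailbox module handle edge cases such as ">From " quoting, which this naive cut on "From " lines does not. The two-message sample mbox is fabricated so the sketch is self-contained:

```shell
# Fabricate a tiny two-message mbox standing in for tag.mbox.
cat > sample.mbox <<'EOF'
From rick Mon May 23 10:00:00 2005
Subject: first

body one
From ben Mon May 23 11:00:00 2005
Subject: second

body two
EOF

# Split on mbox "From " separator lines: one file per message.
mkdir -p articles
awk '/^From / { n++ } { print > ("articles/" n) }' sample.mbox
```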
[Raj] There is a service run by news.gmane.org which turns publicly available mailing lists into news (or Usenet posts, I am not sure of the correct term). They have a free NNTP server running at news.gmane.org, and you can follow quite a lot of mailing lists using your favorite NNTP client.
I find it quite useful for following mailing lists in which I have a bit of interest.
Excellent tip! I note that they have 6929 newsgroups, including "gmane.org.user-groups.linux.cabal". But, of course, anything you post there isn't going to get gated back into the mailing list in question: It's a unidirectional gateway, only.
[Raj] It is bi-directional http://gmane.org/post.php :-)
Well, I know that it isn't for the CABAL conspire mailing list, for example. I suspect it ends up being bidirectional only for mailing lists that permit posting from non-subscribers -- which have been becoming rare.
[Raj] I am not sure about the CABAL mailing list, but I use the Gmane NNTP interface to mail to the php.general mailing list (http://news.gmane.org/gmane.comp.php.general). The first time I put a mail into that mailing list, I got a mail from Gmane asking me to respond to it, to verify that I am a real human being and not a spammer. After a few minutes, the posting appeared on the mailing list.
I checked other archives of php.general (for example http://marc.theaimsgroup.com/?l=php-general ) to verify that it was appearing not only in Gmane's archive but on the real php-general mailing list.
php-general accepts mail from a "list of pre-approved mail addresses" -- basically an email id which has replied to its challenge when the user posted mail for the first time. I guess you can say it can accept mail from non-subscribers. I will check some mailing list which requires subscription (for example Linux-india-general http://news.gmane.org/gmane.user-groups.linux.india.general) and let you know the results.
At a guess, php-general may be set up the same way CABAL's conspire list is: Postings are accepted from subscribed addresses, while any from non-subscribed addresses are held for listadmin vetting.
That mailing list's subscriber roster includes, for example, member "goulc-conspire@gmane.org", which no doubt is the script that grabs and translates all new mailing list postings to articles in gmane.org's news spool. If someone who is not a mailing list subscriber, such as you, duly joins Gmane (the "proving you're a real human being not a spammer" bit -- a delightful turn of phrase, by the way), and then follows up an article in newsgroup "gmane.org.user-groups.linux.cabal", a different script grabs your article from the spool and generates an e-mail to the mailing list address -- where it's held for listadmin attention as a non-subscriber post. I as listadmin would undoubtedly respond by approving the post manually, and adding your address to the roster of non-subscribed addresses whose postings will be auto-accepted anyway in the future.
I suspect something similar happened with your Gmane-based posting to php.general: Either it was merely manually approved by a listadmin and that's it, or it was approved by such a listadmin and then your address was cleared for subsequent acceptance.
From J.Bakshi
Answered By: Thomas Adam, Mike Orr, Jimmy O'Regan
Hi,
Some of you have already done some experiments with different window managers. I request you to share your experience. I was a KDE user.
[Thomas] You're asking for a fight, yes? Joydeep, I/we have already pointed you to past articles about different window managers. Heck, I even published it in TAG for you. Here it is:
http://linuxgazette.net/114/tag/4.html
(hint: look at the footnotes, and the URLs therein)
Thanks for the link. I am also grateful to both Thomas and Mike Orr for letting me/us know about their valuable experience with different WMs, their positive as well as negative (I should say less-featured) sides. Especially Mike's reply makes me sentimental about going back to KDE -:))) . But at the same time I have a question in my mind (maybe a common question): we generally, I repeat generally, don't go for a hardware upgrade frequently. I used my first self-assembled PC for roughly 10 years. KDE gradually becomes more hungry and, as a result, fatty too -:) . So it may happen that KDE makes your 2-3 year old hardware (CPU, RAM) obsolete, and then you may feel like switching over to another small and fast WM to continue with your existing hardware. Only KDE themselves can let you continue with KDE, by providing fast and only a little fatty code.
[Thomas] Well, yes. My own opinion is that you shouldn't have to upgrade your PC from a few years ago, to satisfy the working of KDE (or some other application.) If it is programmed well, then it ought to handle lower-end system just fine. I suspect it's a trade-off though between users wanting more features, and the fact that the developers of KDE themselves have a lot of new shiny hardware, such that they're not aware of the slowness issues on older hardware.
Definitely they have the latest hardware, so they don't bother about the older -:( . But the term "latest" depends on the time factor. Those who have the latest hardware have no problem running KDE, but soon their hardware becomes older with respect to KDE. So if KDE doesn't control its urge to become more hungry and fatty, it gains new users who own the latest hardware, while at the same time losing users whose hardware was once the latest but who cannot upgrade frequently just for KDE.
[Jimmy] "The trend in desktops, across all operating systems has been to continuously add features and graphics with each new release. Unfortunately, cool icons, animation and complicated multi-paned desktops have usually required increasingly capable machinery. For various reasons, Linux desktops seemed to have suffered less from this performance crunching bloat than other packages, such as Windows XP.
KDE 3.1 has actually reversed the trend. To prove my point, I loaded it on an antique 133 Mhz. Pentium desktop machine. The box had 128 MB of RAM, 256K of L2 cache, a 2.5 GB disk and Debian. Even though KDE took about two and 1/2 minutes to load, most of the programs, menus, icons and animations seemed to appear almost instantly and ran without a hitch."
http://www.linuxplanet.com/linuxplanet/reviews/4676/1
KDE developers do know about the slowness of KDE. Heck, back in 2001 Waldo Bastian wrote this paper (http://www.suse.de/~bastian/Export/linking.txt) about part of the reason behind it (ld.so isn't great at resolving C++ symbols) and wrote a module for KDE to work around it (http://webcvs.kde.org/kdelibs/kinit). Michael Meeks is currently doing similar scary things with OpenOffice: "Spent some time examining C++ symbol table creation in some detail; slightly amazed to see the amazing C++ vtable construction inefficiency in terms of bulk of symbols; GObject for all it's failings is extremely symbol efficient in that way." (http://www.gnome.org/~michael/activity-to-2004-06.html)
My local LUG in India has reported to me that KDE 3.3 is faster than its previous version, so hope for the best -:) Any feedback is welcome.
Recently I have switched over to IceWM. IceWM runs comparatively faster than KDE -:)
[Thomas] Of course it runs faster -- it's a WM and not a DE. That, and DEs tend (not always, but in the case of KDE it does) load up core libs that are needed to run a small part of KDE (or sometimes a large one) that you might never use. Then there's konqueror -- the libs for that are much the same. So you can see how the memory factor escalates as time increases. You want to know how I got on the "straight and narrow", eh? Well, read on:
Slackware 2.0: Twm
This was the first ever distro I used. I had absolutely no idea what I was doing, but I do remember being able to get XFree 3.x up and running, having spent a few months prior to that using the console, as I really didn't know any better.
When X started, up popped twm. Coming from a Windows background at the time, I thought that maybe I had broken my computer. Whilst something had obviously loaded, it was not what I was expecting. The menus were not very well defined, and for me to see them, I had to click and hold the mouse button. Hardly user-friendly...
[Jimmy] IIRC, I've only used TWM three times: first, Summer '98, when trying out VNC for the first time (with the Java applet), then again researching an article about CoLinux, using Cygwin, then a few weeks later with that setup, trying out Metasploit: injecting a VNC server using an RPC vulnerability, and viewing the results from the VNC viewer on CoLinux, viewed through Cygwin's X server (I was able to see it worked by the window-in-window view :)
[Thomas] ... or so I thought. But at the time, I just assumed twm was all there was. I got to know it, and tweak things about it. Surprisingly enough, twm is more customisable than people tend to realise. Ok, so I know the default look-and-feel of it is enough to make anyone vomit (the green really does remind me of the sea), but if one goes digging about in ~/.twmrc, it becomes obvious what one can change about twm. I started off by changing the colour to a deep red. Since the foreground colours were white, the contrast this gave was pleasing to me.
Twm also has an icon manager. I didn't know this was the name of it, and why should I? Whenever I iconified a window, it just appeared on the root window almost as a text label. But even that's customisable... The end result was that, although very primitive, it functioned, and I was happy with it.
I ended up staying with twm for about a year. I didn't know any different, and it was only after leaving Slackware for this then-new distribution, RedHat, that things changed slightly...
RedHat 5.0: KDE
[Jimmy] That was my first distribution! February '98, in college.
[Thomas] Anyone reading this who knows me will laugh at the thought of me ever trying KDE, let alone using it. But it's true; one might say it was almost unavoidable that I would use it at some point. Of course, I'm referring to KDE 1.x -- the version of KDE that actually worked. My main PC (up until very recently) was a P166. KDE 1.x actually ran on it at a not unreasonable speed, all things considered. KDE was still in its infancy compared to the over-weighted and monstrous beast it has become today.
But (and you can quote me on this) I liked KDE 1.x. I liked its style and speed, even on a P166 -- and that matters, since that machine was old even when I was using it. Compared to the twm jail-house I had been in previously, this thing was immense.
Sure, I had a few issues with it. I remember distinctly that the icons on the desktop were temperamental. One moment they'd work, and the next moment they wouldn't, or they'd disappear completely, etc. But I didn't let this bother me too much. It was useable. It worked very much like Windows, which meant I didn't have to think about what I was doing -- KDE made all the choices for me.
I then switched distros a lot thereafter. I tried GNOME, and various other WMs, before settling on FVWM -- actually, this was on RH4 using their 'AnotherLevel' package. (The Lesstif theme)
[Jimmy] And that was my first WM (though with the FVWM95 default theme). Then Afterstep, Enlightenment+Gnome/KDE/Afterstep, Sawfish+Gnome/KDE/Enlightenment/WindowMaker, KDE/Gnome, now Gnome with whatever it uses now -- Metacity, I think.
[Sluggo] You used KDE before FVWM? That's a trip.
[Thomas] Oh no. I used TWM before I used anything else -- by "settled", I meant that I went back to FVWM having tried KDE. I guess the ordering was too vague in my description.
[Sluggo] Although I guess now that KDE is the default on many systems, it's pretty common.
[Thomas] It is more common now than it used to be. For years RH (through releases 4-6) pushed FVWM into the forefront. Indeed, the classic "desktop" competition that RH held in 1996 spawned the (in my eyes) infamous AnotherLevel theme. How I loved it. It was only very recently (say RH 8 onwards) that RH used KDE as the de facto desktop upon installation. Although now that GNOME is the GNU desktop, that has seen its way to the top on some distros (Ubuntu is notable for this.)
[Sluggo] The first encounter I had with a GUI (besides Macintosh) was when the university computer lab replaced its h19 terminals with X-terminals. I thought it was a ridiculous waste of money at the time. The terminals came with a built-in window manager (Motif) and a menu to telnet into the frequently-used hosts.
[Thomas] Hehehe, what a cool thing to do. Bet it was slow though?
[Sluggo] Slow, no. What do you mean? All the hosts were on the campus network, which was no slouch.
[Thomas] Ah, I see. I just had problems trying to picture the scale of it in my mind -- whether more users being on it at any one time might have meant things were fairly slow (at University here, you can always tell when there are too many people logged on. :))
[Sluggo] This was the University of Washington, so pine was encouraged. But you could do xdm login to a BSD/ULTRIX host and get twm. That's what the cool people did.
[Thomas] Nice bit of history you've detailed there, Mike. I'd really love to see an article of how Unix was used in the past. Now, with PCs so cheap and popular, it's all the same method. You don't often hear of people saying they're using a dumb-terminal (unless they have an XDMCP server running.)
[Sluggo] The tradeoff was it took more CPU time on the host, so after eight hours you got nice'd and things slowed way down. That was late 1990. Most of what is now blogs and chatting and eBay was done on Usenet. I'd go down Friday evening to read news and get home Saturday afternoon. We didn't really use the GUI at all except to set the background image and run a clock.
A few people played MUDs.
[Thomas] If you ever get the time, I would encourage you to write an article about it -- things like that fascinate me.
[Thomas] You don't have to look very far for the WM I use. Just look at the recent articles I have written, as well as recent TAG entries from a few months past. Indeed, you can see my config file on the fvwmwiki:
http://www.fvwmwiki.org
... as well as other examples on the fvwm-forums:
http://fvwm.lair.be
My H/W config is an AMD Sempron 1.5 GHz with 128 MB DDR, 256 KB L1 cache, and a UDMA HDD. I have also tested Fluxbox, which is based on Blackbox, and found it slower than IceWM.
[Thomas] The speed issue is largely dependent on a number of things, Joydeep. Typically, it could be that you're running extraneous applications along with the window manager you're loading, or it might be that the WM itself is loading them, providing you with the default theme it ships with (Enlightenment used to do this -- no wonder I hated it.)
Indeed, I often get asked this question about FVWM -- and why it's so slow. Typically in FVWM's case, it can usually be attributed to the overuse of colorsets. Colorsets (in FVWM parlance) are a means of specifying (in a lot of detail) the colours that can be assigned to various parts of a window (and all aspects thereof) as well as background colours. Colorsets can be told to use gradient colours and the like, which, on lower-end systems, take up a lot of memory.
Also, colorsets used with transparency can cause a slow-down, due to the way the Xserver handles it (remember that colour management is the responsibility of the Xserver.) You can tell colorsets to buffer the transparent images as well -- this can be slow. Where colorsets use pixmaps (and other gradients, as opposed to a single block colour), more memory is used. Indeed, when specifying colorsets in FVWM, the numbering is important. If one were to use:
Colorset 100 fg white, bg blue
Then in memory, there would be allocated space for 101 colorsets, irrespective of whether they've been defined or not -- so keeping the numbers low in their definitions is often important. (I'll be discussing colorsets in FVWM properly in an article.)
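As a sketch of what that means in practice (the colours and styles here are illustrative only), a compactly-numbered config looks like this:

```
# Define only the colorsets actually needed, numbered from zero
Colorset 0 fg white, bg darkred    # focused windows
Colorset 1 fg grey30, bg grey70    # unfocused windows
Style * Colorset 1, HilightColorset 0
```

Two colorsets, numbered 0 and 1, allocate space for just two entries rather than the 101 in the earlier example.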
But I digress.
There are, of course, some general things one can try to reduce slowness. It's not so much of a problem on 2.6 kernels (as the scheduler is much better), but the Xserver used to be reniced when it was running so that it was more responsive -- but such things were only ever negligible at best -- and renicing a process is NOT a general route to making things run faster -- though for really low-end systems, it might help.
Going back to the subject of colour, running one's Xserver with a low colour-depth is advisable, something like:
startx -- -bpp 16
... which can reduce memory consumption. Note that colour-handling in FVWM (and colour limiting, where it is applicable) is now done automatically where applications try and steal the entire colour palette (Netscape 4 used to be notorious for this.)
[Sluggo] I suppose you mean where the window has a different palette than the window manager, so everything around the window turns psychedelic colors when you move the mouse into the window.
[Thomas] That's the one, yep. I'm glad you're aware of the difference between a colour palette (of a window) and that of the overall Xserver. :)
[Sluggo] That was cool too. We used to do it on purpose just to watch the colors change. (Stop snickering, Ben.) It wasn't just Netscape. I think xloadimage (an image viewer that comes with X) did it too. It's not "stealing" the palette.
[Thomas] Indeed not. Where a palette is "stolen" what tends to happen is that the application defaults to black and white.
[Sluggo] It's just a fact of life that if the image has a different palette than the environment, one of them will have the wrong colors. The program rightly assumes that if the mouse is in its window, you want to see that image properly. Of course, now that everybody's video card has enough memory for 16 million colors, palettes don't matter anymore.
[Thomas] I'd agree, except for the fact that palettes do matter, as they're ultimately affected by the overall colour-depth the Xserver runs in. I don't run mine in so-called "true-colour". Mine's in 16 bit (sometimes eight.) But then I don't have a great graphics card... :)
[Sluggo] I don't remember if I first used FVWM before or after Linux (1993),
[Thomas] It wouldn't have been before -- FVWM 1.24 (?) was released in 1993.
[Sluggo] but it was my second window manager and I used it for years. Used WindowMaker for a while, tried a bunch of others. At work (1998) I used KDE so that any unfamiliar thing I might need was under the menus.
[Thomas] I think a lot of people who used Linux early on followed this path. KDE, despite all its faults now, really was something of a revolution back then. In fact, as I said in my commentary before, I actually liked KDE 1.x -- it really did look promising. And then they added cruft to it...
[Sluggo] That seemed better than wasting time finding something equivalent and installing it. Then I switched to KDE at home too, and Thomas knew I was lost.
[Thomas] Don't worry, you'll come back to your roots eventually. :)
[Sluggo] For speed, there are pretty much three classes of window managers/desktop environments:
- KDE, Gnome, Enlightenment: slow.
- WindowMaker, fluxbox, uwm, and a dozen others: much faster.
- FVWM, twm, wm2: slightly faster still.
- larswm, ratpoison: fastest.
[Thomas] That's a fair comparison. Mind you, when you wrote it, what were you thinking when categorising by speed?
[Sluggo] The delay between when you initiate an action and it happens. KDE opens and closes windows noticeably slower than FVWM.
[Thomas] Ah, I see. That might be to do with how the windows are decorated in KDE. But I can't say for sure (typically what happens is that the WM will grab the window before it is mapped visibly onto the screen -- and then apply the decorations. The details of which I am not about to delve into. :P)
[Sluggo] You'll notice a direct tradeoff between speed and features, and between using xlib directly vs a widget library. But xlib programs also look like crap and have idiosyncratic input behavior (whether you can tab between fields or press Enter for OK, whether you can replace text by selecting it, etc), so there's another tradeoff.
[Thomas] Umm, yes, but the speed of the window manager (at the root level) depends on how it is implemented. Most WMs have to be written in Xlib if they have any chance of running smoothly. But the speed of the WM doesn't depend on what widget set the application was written in.
Well, let's see. Most of the x* apps that are bundled with the XFree86 release are written using the Xt widget set (one level of abstraction up from pure Xlib.) Yes, they might look ugly, but there's some things you can do to "tart" them up.
X apps should (where they've been programmed properly) respond to resource settings. These are values held on the Xserver that make customisation of the application's widgets available to the user. When the Xserver loads, it will often process the files stored in:
/usr/X11R6/lib/X11/app-defaults
... the program responsible for that is 'xrdb'. Indeed, like any command, one can customise it for oneself. Whereas most commands, when they load, look for a config file in $HOME to use instead of the system-wide one, xrdb doesn't, since one can use multiple files for different programs. Instead, one can use a ~/.Xresources file or a ~/.Xdefaults file (they're the same file -- one is often a symlink to the other.) As to what they can contain, let's take an example.
xclock.
That's a simple application. If we wanted to, say, tell it to use a red background, then to our ~/.Xresources file, we could add:
xclock*background: red
... and note the syntax. The "*" matches anything after xclock in the resource name, whereas I could have used:
xclock.background: red
... which would have stopped the matching at the full word 'xclock'. The colours are looked up via the rgb.txt file, so as long as the colour names are listed there, things will be fine.
Note that it's traditional for resource names to be lower-case. Indeed, the resource name can be ascertained via the 'xprop' command (or the FvwmIdent module, should you use FVWM.)
Going back to the xrdb file though -- save it. In order for those values to be applied to the server, we have to tell xrdb to load the file. The command that should be used is:
xrdb -merge ~/.Xdefaults
... indeed, that does what you think it does. If you were to use: "xrdb -load ~/.Xdefaults" --- that would work, but you'd overwrite the global definitions stored in /usr/X11R6/lib/X11/app-defaults -- so -merge is always the best one to use. I also add this to my ~/.xsession file.
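Putting the above together, a minimal ~/.Xresources might look like this (the XTerm entries are illustrative extras, not part of the xclock example):

```
! ~/.Xresources -- apply with: xrdb -merge ~/.Xresources
xclock*background:  red
xclock*foreground:  white
XTerm*scrollBar:    true
XTerm*saveLines:    1000
```

Any xclock or xterm started after the merge will pick up these values.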
[Sluggo] Another question is memory. Your computer (1.5 GHz, 256 MB) is passable for KDE, but another 256 MB would be more comfortable.
[Thomas] Indeed. X11 by itself takes up a fair chunk.
[Sluggo] I tried wm2 and other minimal window managers, assuming they would use less memory than FVWM, but they didn't. So FVWM is amazingly efficient given its wide configurability.
[Thomas] It's fast because a lot of the "niftier" features aren't part of the core FVWM -- they're modules. All that's in the FVWM core is enough to get FVWM useable, without bloating it out.
[Sluggo] And the speed it pops up windows and restores them is about as snappy as you can get without losing the "overlapping windows" paradigm.
[Thomas] There's lots you can do about that. Overlapping windows can be a thing of the past, depending on the window placement policy in use. In FVWM parlance:
Style * MinOverlapPlacement
[Sluggo] No, I mean losing the capability of overlapping windows.
[Thomas] Ah, that's err, rather a big difference from what I was rambling on, earlier. :)
[Sluggo] Most Unix, Mac, and Windows users are used to moving a window anywhere and letting other windows partially sit on top of it.
[Thomas] Yup -- I do it all the time under Windows at University. Alas, it's the only way of working.
(At University, I usually embarrassingly keep typing in my Linux username and password that I'd use at home, and then getting frustrated when the netadmins ask me who "n6tadam" is, and why am I trying to "crack their account".)
[Sluggo] Windows 1.0 had only tiled windows, meaning a window could never cover another, and they all covered 100% of the screen so there was never a background peeking through.
[Thomas] I liked this way of working (IIRC, the same paradigm was true under Win 3.1 with progman.exe)
[Jimmy] Not quite. It was the default for each application to use the full screen, but it was possible (if damn near unworkable) to have multiple programs overlapping.
[Sluggo] larswm returns to this as a way to maximize productive space and performance, although it also allows some windows to "float". Normally you'd use tiled windows for xterms, editors, your clock/biff/meter panel, etc, and floating windows for graphical applications like Firefox and the GIMP.
[Thomas] That sounds like an interesting way of working, actually. Was that the default way of working under larswm? (I'll try it under FVWM and see if I like it. I currently use 'MinOverlapPlacement', and although I have 9 pages available, they almost all look too cluttered, and feel it in operation.)
[Thomas] ... tries to achieve that. That will try and place windows in a linear fashion filling up as much screen as it can, as in:
[window1 ][window2] [window3][window4][win5] [ window6 ]
... one can set placement penalties -- to place windows away from others, based on their edges and suchlike -- I don't want to go into too much detail, as it can get boring.
Other placement policies one can use include "TileCascadePlacement", which works like MinOverlapPlacement but will start tiling windows when all of the available screen space has been used. There are countless others...
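As a sketch (the window names here are illustrative), mixing placement policies per application looks like this:

```
# Minimise overlap by default
Style * MinOverlapPlacement
# Tile xterms, cascading once the screen fills up
Style xterm TileCascadePlacement
```

The later, more specific Style line wins for the windows it matches, so xterms get tiled while everything else keeps the default policy.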
[Sluggo] I'm mostly satisfied with KDE. Of course, everybody wishes it started up faster and opened windows faster. But the features, themeability, and integration are a good tradeoff. As the KDE developers have pointed out, KDE is "bloated" only in the sense that it has more features.
[Thomas] Sure -- but how useful are they? :)
[Sluggo] You haven't spent years struggling to get programs to input and display non-English characters. Every program did it differently, you needed special fonts, and some programs didn't do it well at all.
[Thomas] Oh, I can well imagine though. It's bad enough now (although considerably easier, I'm sure.)
[Sluggo] Now with KDE you just click the keymapper icon in the bottom right and the keyboard changes language. I don't use that feature -- I got so burned out that I do everything in ASCII now -- but it's nice that it's there without having to dig through a HOWTO and configure a bunch of programs.
[Sluggo] But the features are implemented efficiently, compared to running equivalent programs standalone. konsole with one session uses more memory than xterm, but konsole with two or three sessions uses less memory than two or three xterms, and it has convenient tabs and font selection. FVWM can't be beaten for manual window placement (why doesn't KDE have that?), and it's more intelligent about not giving new background windows the focus when you don't want it to. On the other hand, it only has one flat list of windows per desktop, whereas KDE groups them by application. The latter is important when you have fifty GIMP windows open, a couple konsole sessions and a few Firefox windows. And FVWM can show windows without decorations -- e.g., biff and transparent oclock -- and you can use the root menu to move them or kill them. But you have to set up the menus yourself (unless you're using Debian, which has a WM-independent menu configuration). But I like KDE's panel and klipper and grouped window list, so that usually clinches it for me.
Openbox is another WM that performs well, but it lacks themes. UWM (the Unix window manager) with UDE (the Unix desktop environment) is based only on Xlib for speed, but I have found hardly any speed difference between IceWM and UWM (with UDE).
[Thomas] Any good WM can only be written in pure Xlib (as FVWM is.)
FVWM is there, but setting up its configuration is terrific.
[Thomas] "Terrific"? That's a new one. :P
I have not tested the enlightenment window maker yet.
[Thomas] Enlightenment and "Window Maker" are two different programs.
If anyone uses a window manager faster than IceWM (and configurable like IceWM), please share the experience.
[Thomas] That's a matter of personal bias -- and indeed, see my ramblings above.
[Sluggo] But speed isn't the only criterion. If speed is the only thing you care about, try one of the minimalistic window managers like larswm (http://home.earthlink.net/~lab1701/larswm) or ratpoison.
Google have released a new chat program. It's Windows-only, but Google being the cool people they are, they've gone with open standards (Jabber) and provide instructions for Gaim (http://www.google.com/support/talk/bin/answer.py?answer=24073) and Psi (http://www.google.com/support/talk/bin/answer.py?answer=24074).
[Jason] This hit Slashdot, so everyone's probably seen it already, but Google's use of Jabber is apparently not quite as open as one might hope:
http://www.livejournal.com/users/nugget/97081.html
[Rick] Well, opening up gateway services carefully and slowly, to control spam and other service abuse, is only reasonable:
............... We look forward to federating with any service provider who shares our belief in enabling user choice and open communications. We do believe, however, that it is important to balance openness with ensuring that we maintain a safe and reliable service that protects user privacy and blocks spam and other abuses. We are using the federation opportunity with EarthLink and Sipphone to develop a set of guidelines by which all members of the federated network can work together to ensure that we protect our users while maximizing the reach of the network. [...] ............... |
http://www.google.com/talk/developer.html#service_4
Seems reasonable to me. On a related note, I bought Linux Format today and was amused to see this:
"The award of 15 [Summer of Code] bounties to the Gaim project led to
speculation that Google has its sights set on integrating instant
messaging into its search portal."
http://blogs.sun.com/roller/page/webmink?entry=addressing_proliferation_deeds_not_just
Specifically, this means that OpenOffice.org is now LGPL only. They are also asking OSI to not recommend its use.
http://politics.slashdot.org/politics/05/09/01/122247.shtml?tid=109&tid=219
http://www.forbes.com/business/feeds/afx/2005/08/31/afx2200406.html
http://www.theinquirer.net/?article=25845
(Warning: 3rd link crashes Firefox)
Massachusetts decrees all state documents must be in PDF or OpenOffice formats starting in 2007.
[Jimmy] Microsoft are (surprise, surprise) upset with this decision. Alan Yates wrote a series of "concerns" about the decision.
Tim Bray (creator of XML) took the time to reply to some of these concerns, though this, perhaps, sums it up best:
............... Recently we spent a few days on a farm on Saskatchewan, during which I had occasion to help clean the floor of the barn, one of whose inhabitants was a Hereford bull named "El Presidente", being boarded for a friend. So, when I assert that these talking points are, by and large, Dung of Male Bovine (DoMB for short), I do so in an educated voice. ............... |
One of the most commonly questioned "concerns" is that of the difficulty of changing formats:
............... Yates raises the spectre of "enormous costs" facing Massachusetts in a file format switch. This is rich. Yates' hyperbolic sense of moment is unevenly apportioned. He omits the enormous cost of a possible Microsoft upgrade to Office 12 one day, should Microsoft be so lucky. ............... |
(http://business.newsforge.com/article.pl?sid=05/09/21/0811222&from=rss)
The KOffice team were also keen to correct Yates' assertion that "[the] four products that support the OpenDocument format ... are slight variations of the same StarOffice code base": http://dot.kde.org/1127515635
In an interview at OnLAMP.com, Richard M. Stallman discusses (among other things) version 3 of the GPL.
Though details of GPL 3 are sketchy, it has seemed certain that it will contain a 'web service' clause: licences such as the Affero GPL and the APSL have clauses that require the distribution of source code for modified versions of software covered by those licences if the software is used in public servers.
These clauses are considered non-free by Debian, because this affects the use of the software. RMS's comments show that he has considered this.
............... Running a program in a public server is not distribution; it is public use. We're looking at an approach where programs used in this way will have to include a command for the user to download the source for the version that is running. But this will not apply to all GPL-covered programs, only to programs that already contain such a command. Thus, this change would have no effect on existing software, but developers could activate it in the future. ............... |
Peru has passed a law that requires the use of free software in government departments.
............... Article 1 - Objective of the law Employ exclusively free software in all the systems and computing equipment of every State agency. Article 2 - Scope of application The Executive, Legislative and Judicial branches, as well as the autonomous regional or local decentralized organisms and the corporations where the State holds the majority of the shares will use free software in their systems and computer equipment. ............... |
(Google translation of an article, Text of the bill from opensource.org)
http://lwn.net/Articles/152628
A review of Rockbox, an open-source firmware for Archos digital music players. They are also porting it to some iRiver models, although this isn't finished. The article also discusses GP2X, a handheld video-game console with open specifications. It runs Linux, although it's unclear whether the distribution will be DRM protected.
There was a lot of fuss in Australia when several companies who are in some way using "Linux" were contacted by Jeremy Malcolm, a lawyer acting on behalf of Linux Australia Inc., demanding that these companies agree to a trademark licence and pay a fee of between A$200 and A$5,000.
Although it was widely reported that this was somehow an attempt by Linus Torvalds to cash in on the success of Linux, in this post to the Linux kernel mailing list, and after a brief explanation of trademarks, Linus said this:
............... Finally, just to make it clear: not only do I not get a cent of the trademark money, but even LMI (who actually administers the mark) has so far historically always lost money on it. That's not a way to sustain a trademark, so they're trying to at least become self-sufficient, but so far I can tell that lawyers fees to give that protection that commercial companies want have been higher than the license fees. Even pro bono lawyers chanrge (sic) for the time of their costs and paralegals etc. ............... |
(LMI is Linux Mark Institute (http://linuxmark.org): "LMI is not designed to generate profits for anyone, which is why Linus Torvalds has given LMI primary sub-license rights for the mark.")
In the end, however, the trademark was ruled invalid in Australia:
............... In a letter dated 31 August addressed to Perth-based lawyer Jeremy Malcolm, who represents Torvalds, Intellectual Property Australia official Andrew Paul Lowe said: "For your client's trademark to be registerable under the Trade Marks Act, it must have sufficient 'inherent adaptation to distinguish in the marketplace'. "In other words, it cannot be a term that other traders with similar goods and services would need to use in the ordinary course of trade." ............... |
(http://www.zdnet.com.au/news/software/soa/Linux_trademark_bid_rejected/0,2000061733,39212227,00.htm)
Sun Microsystems announced that they will be making Project DReaM ("DRM/everywhere available") available under the terms of the CDDL (the licence used for Open Solaris) as part of their Open Media Commons initiative (http://www.openmediacommons.org), aiming to provide an open standard for DRM.
Sun's Jonathan Schwartz spoke about the project:
............... The issue at hand is fair compensation without loss of fair use. The Open Media Commons is committed to creating an open network growth engine, all the while continuing to protect intellectual property in a manner that respects customer privacy, honors honest uses of media, and encourages participation and innovation. ............... |
(http://www.sun.com/smi/Press/sunflash/2005-08/sunflash.20050822.2.html)
According to Wikipedia (http://en.wikipedia.org/wiki/Project_DReaM), Project DReaM consists of three parts:
...............
............... |
DRM Watch has more information about DRM-OPERA:
............... DRM-OPERA's roots lie in Project OPERA, which came out of the Eurescom R&D initiative sponsored by the European Union and the European telecommunications industry. Sun's R&D lab contributed heavily to Project OPERA, which produced an architecture for interoperable DRM in 2003. OPERA achieves interoperability among DRM systems -- interoperability between Microsoft and RealNetworks DRMs has already been demonstrated -- essentially by reducing DRM licenses down to a lowest common denominator of authenticating users only (as described above) and providing "play once" as an atomic licensing term that all DRM systems can understand and support. Each of the DRM systems involved in a specific instance of interoperability can manage more complex licensing terms internally and communicate them through the OPERA architecture via "play once" licenses. ............... |
(http://www.drmwatch.com/special/article.php/3531651)
It's important that they're making this effort, but the proof points will occur when the rights holders and device makers get on board.
http://www.perl.com/lpt/a/2005/09/22/onion.html
Larry Wall talks about the current state of affairs in the Perl world, using spies as his analogy:
............... Anyway, now that I've been wading through the Bond corpus again, I've noticed something I've never noticed before about the show. It's just not terribly realistic. I mean, come on, who would ever name an organization "SPECTRE?" Good names are important, especially for bad guys. A name like SPECTRE is just too obvious. SPECTRE. Boo! Whooo!! Run away. You know, if I were going to name an evil programming language, I certainly wouldn't name it after a snake. Python! Run away, run away. ... Everyone my age and older knows that Five-Year Plans are bad for people, unless of course you're someone like Josef Stalin, in which case they're just bad for other people. All good Americans know that good plans come in four-year increments, because they mostly involve planning to get reelected. I probably shouldn't point this out, but we've been planning Perl 6 for five years now. Comrades, here in the People's Republic, the last five years have seen great progress in the science of computer programming. In the next five years, we will not starve nearly so many programmers, except for those we are starving on purpose, and those who will starve accidentally. ............... |
http://beta.news.com.com/Microsoft,+JBoss+link+server+software/2100-7344_3-5883498.html
............... Specifically, the companies expect their collaboration to achieve interoperability in several domains:
............... |
With a deluge of product announcements and a heady debate on the future of software patents, the Linux World Conference and Expo (LW2005) sailed into San Francisco in early August. Debian announced its intention to go more commercial, and Darth Vader and representatives of the Evil Empire up north made an appearance as well.
This was fun as always, but some of the flavor of past LW conferences was missing.
To some extent, this was less a Linux World conference and more an Open Source Software conference, with many major companies showing off their Open Source credentials. Just look at the keynote presenters, starting with Oracle, HP, and IBM (see the conference website); then there were the Red Hat and Mozilla presentations. Certainly there were good panels, like the one OSDL presented on software patents, but the 500-pound gorillas dominated.
Of course, IDG, the conference organizer, did say that the "prevailing theme of this summer's conference program will be Linux in the enterprise." They did emphasize that the "conference program will illustrate how enterprises are reaping the business benefits of Linux and Open Source," so there were presentations on Grid offerings and on calculating the ROI of using Open Source Software in the Enterprise.
Many alumni I spoke with noticed the change in emphasis and were wistful for the older, Linux bad-boy attitudes and the give and take of technical presentations where peers openly disagreed. To be sure, there was some of that, but there were more suits and better shoes at LW2005, and that boosted attendance at both the conference and the expo. Perhaps this is the inevitable fallout of getting corporate elephants to dance...
Linux World had an opening day of tutorials organized into 1 or 2 sessions of 3 hours, followed by three days of conference sessions and the Expo. I did see a few folks who went just to a tutorial and then the Expo, even though the keynote speeches were open to all.
The topics included Samba and its administration, hardening Linux systems (taught by Bastille Linux creator Jay Beale), a Linux cluster hands-on lab, Xen and other virtualization techniques, a session on the Novell kernel debugger, and a detailed look at Google APIs and utilities. There was also a 2-part Linux system administration track. Each tutorial had a book of slides and some additional reference material. Since they were priced fairly, this was a good deal.
Jay Beale's system hardening materials were based on material from his security talks website, which includes links to an earlier BlackHat presentation on locking down Solaris and Linux that was shrunk a bit for his LW2005 tutorial. Of course, many of his points are incorporated in the Bastille Linux scripts, which also offer good explanations of their many security options. Use those scripts and check out the security talks link.
The tutorial for web developers on using Google utilities and APIs was taught by Deryck Hodge of the Samba team. It covered custom searches, using the Google API as a Web Service with Perl and Python, gcli as an interactive "shell" for Google searches, GMail tricks, XML parsing, and building custom maps for display. He promised to have "code and tutorial stuff" available on his updated website.
There also was a 3-day, conference-long session on Linux System Administration in a large room that was open to any full conference participant. That tied in very nicely with the 'free' (as in beer) Linux certification exams offered by LPI, the Linux Professional Institute — Expo attendees were offered the exams for only $25 instead of the usual $100, so a newbie could (with some outside reading) attend part or all of the 3-day session and have a good shot at the basic certification.
Several of the security presentations were led by Jay Beale (who presented the Linux hardening tutorial), but many other presenters rounded out the topic. The first session, "Linux Security Report", was presented by Coverity creator Seth Hallem. He advocated using static code analysis to find as many potential bugs as possible, and his tool (Coverity Prevent, a spinoff of a graduate project at Stanford University) is very good at finding those static bugs. His main point was that good static analysis will run down more code paths than test cases can, and will check all variables, working toward a goal of zero compromising defects. Large software vendors like Oracle, Cisco, and Sun seem to have joined the Coverity static analysis camp.
He presented a lot of statistical data: security vulnerabilities go up with more code (but Linux seems to mostly reverse this), developers spend about 40% of their time on reworking buggy code, the vast majority of security flaws are from detectable bugs not caught by limited test cases, etc. One key result: Linux was very good in comparison with commercial software of its size and complexity.
Another key result: over half of the bugs in Linux were in the device drivers, with only 1% in the kernel (and most of those due to the addition of new features). That's especially good, since many Linux installations will not use the problematic drivers. Coverity discovered 1,008 defects in other parts of version 2.6.12, outside of the file system and kernel. Below is a pie chart of those defects.
Two slides about defects in Linux code (security vulnerabilities go up with more device driver code):
A white paper on bugs in the Linux kernel is available here (requires registering with Coverity).
Among the surprises at LW2004 was the appearance of Microsoft in sessions and in the vaunted Golden Penguin Bowl contest. This year had Microsofties competing with Google geeks, who won handily, but the crew from Redmond performed fairly well, actually listing almost as many UNIX variations as the winning Google team in the final round. In the spirit of the occasion, the MS trio came dressed as two Imperial Storm Troopers and Darth Vader (where did they get those great outfits?!).
That was fun, but is this the proverbial "wolf in sheep's clothing" syndrome? Is MS trying to put on an Open Source party hat while surrounding and assimilating Linux as just a better Unix that's easier to manage with MS tools and utilities? Or are they genuinely turning over a new leaf?
Bill Hilf, Director of Platform Technology Strategy at Microsoft, presented one of the more surprising sessions, S42: "Managing Linux in a mixed environment... at Microsoft?: A look inside the Linux/Open Source Software lab at Microsoft". His biography showed a long history of open source credits, including being a senior IT architect at IBM and leading Linux technical strategy for their global markets organization. He was one of the storm troopers, answering most of the Penguin Bowl questions. Unfortunately, his on-line LW2005 slides consist only of the title slide. He was also interviewed the day before on Slashdot.
Hilf said that MS wanted a real expert on Linux and Open Source software, "...not someone to kill the penguin." He also noted that his team regularly uses "over 40 different flavors of Linux and BSD, plus several commercial Unix" products.
Hilf spoke the language of the audience and certainly had the right credentials; it felt like a peer was addressing the room. He gave us speeds and feeds, diagrammed a very large test data center of over 300 clients and servers, and showed us real Linux servers running Open Source software at the evil empire. He also explained how reluctant MS co-workers were eventually won over (after he uncovered SMB errors with a Samba SMB torture suite) and became interested in his lab as an integration test bed. He gradually tempted everyone present with current or beta integration tools based on MS technology, especially MS Virtual Server. He said it was a full product (minus a box) worth $1000, and most attendees made sure to get the software as they left.
His last (and most persuasive) effort was a demo showing the ease of scripting in Monad/MSH, a kind of .Net-enhanced Bash shell with POSIX features. And it's good, very good. It's a familiar shell environment that also leverages the .Net infrastructure and can control services in both the Linux and Windows spheres. It's also a trail of honey to win some mindshare among system administrators and younger Linux users. Here's more on MSH.
All this will make it easier to integrate Linux and Windows - in fact, any Unix work-alike - yet it seems specifically planned to favor managing the integration from the Windows side.
After LW2005, at the Intel Developer Forum in San Francisco, Microsoft announced its upcoming Virtual Server upgrade would support the virtualization of both Linux and Solaris operating systems on Windows servers. Is this a new strategy to surround and 'assimilate' Linux? (Remember that magazine cover before the Web showing Bill Gates as a Borg? What will another year or two bring?)
Just after LW2005 there were articles about a potential deal between OSDL and Microsoft to do so-called independent research on the prospective advantages of Windows and Linux. Was this the camel getting its nose into the tent, or just a continuation of the rapprochement started with Sun in the last year? After all, back in July, OSDL leader Stuart Cohen said he could "...see Microsoft participate in software that runs on top of Linux in the future."
Is a deal with OSDL good for Linux and the community? While truly independent research helps us all, there is probably no way that the results would not be exploited by Microsoft (and they probably expect to pick up some points with their latest version of WinServer). As I finish this article, there seems to be a complete rejection of the joint study by OSDL. It seems that OSDL asked MS to release its Office Suite on Linux as a test and MS balked. Figures...
One commenter opined that having a common core (currently supported by fewer than 70% of the Debian-based organizations) would not actually create more Debian forks, but also may not significantly reduce the current number of them.
Ian Murdock, co-creator of Debian Linux, introduced the DCCA at a press conference on the first day of LW2005. Key among the goals for the first common core release is full Linux binary compatibility, which means supporting the LSB (Linux Standard Base). DCC will be compatible with LSB packages that use the RPM format by using the 'alien' tool to translate each package into native Debian format, since LSB specifies the package format, not the RPM suite. A major secondary goal is having more regular release cycles; DCCA hopes to track LSB releases, roughly an 18-month cycle. Beyond that, the goal is to create a 100% Debian core that can support enterprise business users and their applications and accelerate acceptance of Debian.
DCC is not a Linux distribution. It's better thought of as a software repository with common fixes and updates. Individual distros will use that core and also contribute to it. ISVs would also certify applications to that core. Among the initial DCCA founding members are Xandros, Knoppix, Linspire, Mepis, and SunWah. More Debian distros and some hardware vendors are expected to join by year's end.
The first DCC release is expected in mid-September. It will be based on Debian 3.1 and should be certified to LSB 3. DCC version numbers are based on the major version of LSB.
Bruce Perens of Open Source Labs spoke favorably about DCCA at the initial press conference and again the next day at a panel of Linux/Open Source software authors linked with the PTR/Perens publishing project. Check out the PTR site for downloadable books.
According to Bruce, the DCCA will provide a way to "certify to a Linux distribution, [while having]... multiple support providers who... differentiate themselves at a higher level up the stack." He also noted that globally Debian Linuxes have 2-3x the installed base of SUSE Linux, and this will expand that base.
I spoke with Ian Murdock just before the article submission deadline, and he said "We're in heads-down mode, working on DCC 3.0, the first version of DCC, based on the 2.6.12 kernel... On the engineering front, it's going well." Ian commented on the partner certification program, "...at the end of the day our goal is to have a core that ISVs, etc., can certify to... An ISV would certify once and that would be it."
Ian added, "Debian is a community project. We are also becoming active as the DCC in other Linux projects. For example, we don't want to define our own standards; we want to help LSB define formal standards that will be used for all Linux versions, such as package-name issues, and then we'll do a specific implementation for Debian. We want to be additive to existing projects, not reinvent anything."
"I'd love to see DCC play a role on the LSB project; standards with implementation associated with them are always more successful than generalized standards. If DCC could become a reference implementation, it would make LSB and DCC stronger and also make the whole Linux world stronger."
Open Source software and the future of software patents were touched on by two different keynotes, one by Red Hat deputy general counsel Mark Webbink and another in a panel moderated by Stuart Cohen, CEO of Open Source Development Labs, as well as by the author panel. Webbink spoke about the innovation-crushing effect of software patents and called on Microsoft to refrain from using software patents against users and individual developers. (Software Protection and the Impact on Innovation)
Software patents are more contentious than copyrights and hardware patents, which cover a single object or entity. Software patents typically try to patent ideas, processes, sequences, and even appearances. A pharmaceutical patent would typically cover a single formula or drug family, specified fairly precisely. Software developers run afoul of software patents more often because those patents are vaguer, and too many have been granted for trivial or non-unique 'inventions'.
There currently are over 150,000 software patents, growing at about 10,000 a year. Webbink noted patent searches often cost $5000 each. Searching and analyzing even only 3000 patents would cost $1.5 million. He added that MS alone plans to file 1000-2000 software patents a year.
"What if Dan Bricklin had patented VisiCalc? That would have locked up spreadsheet software through the year 2000." Webbink said many current software patents are not that significant. He noted that MS has a patent for adding and removing white space in a document.
He stridently criticized the patenting of software code alone. Patents should be applied only to unique inventions. He concluded by saying that software patents are used primarily to block innovation by competitors.
OSDL hopes to create a patent repository that will support the Open Source software community rather than restrict innovation. This would be accomplished by major ISVs, like IBM, Oracle, Novell, Red Hat, etc., giving their intellectual property to OSDL or another non-profit to administer.
Perens, at the author panel, noted that "... every large Open Source software project infringes on some patents, but these are bad patents," adding that "we need significant patent reform in the US". He was also skeptical of the OSDL patent pooling idea, as many of those patents may be encumbered by cross-licensing agreements and won't be that useful in defending Open Source software. "This is spitting in the wind", he added, and not reforming existing law.
Diane Peters, general counsel for OSDL, recently offered more detail on the OSDL Patent Commons project. See this report for OSDL's position.
Eben Moglen, chairman of the Software Freedom Law Center, spoke at the OSDL panel keynote and supported the project. "OSDL is the ideal steward for such an important legal initiative as the patent commons project... there is strength in numbers and when individual contributions are collected together it creates a protective haven where developers can innovate without fear."
The Expo had a lot of vendors and a lot of freebies, including a program of free sessions on each day. Among the best freebies were Linux penguins, cloth creatures from Sun Wah and Novell, both cuddly. There was also the 10-inch Dilbert doll from EMC. My fiancee loved all of them!
VMware, as part of its efforts to contribute to the Open Source software community, brought attention to itself by giving full user licenses for its desktop product to all Expo visitors who registered at the booth (though it has been about 30 days and I still haven't received my license yet). This one does not expire...
Besides several CDs of software and collateral info, IBM also passed out a flashlight with a bright white LED. The interesting part is that this one is recharged by a USB port. (So if the power fails, you can find your way out of your cubicle...)
Besides its ubiquitous red baseball hats, Red Hat also handed out thousands of 4 oz. chocolate "Shadow" bars, which were tasty and included a slip under the wrapper with an offer code for $100 off any Red Hat class for the rest of 2005 (extra sweet... BTW, the code # is 458551W).
Novell also offered green SUSE hats at its booth, some of which were later found on the "Pike 'O Shame" at the Fedora booth, along with a few skewered cuddly SUSE lizards. (The announced openSUSE will, for now, contain some proprietary Novell code encumbered by cross-licensing deals; Novell intends to 'open' that code soon, but apparently not soon enough for Fedora loyalists.) See, this still is a colorful community!
Oracle ran its annual Linux install-fest on two "Birds of a Feather" (BOF) evenings (thanks to Kurt Hackel for organizing it so well) and participants at this BOF got trial SUSE and Red Hat software CDs and NEW Oracle 10g R2 CDs plus a light dinner. And I think that was the only food at the BOFs.
Of course, there were several great BOFs, including an open session with the notorious CmdrTaco of Slashdot. And that may have kept too many people up too late.
Some of these may appear in the NewsBytes section of the Linux Gazette, and there's a more complete list at the LW2005 Press site. But a few new efforts should be noted:
Splunking, anyone?
Although Google and MS may provide some competition, an SF startup is targeting the search and visualization of data center logs and other info sources. The idea here is to let systems administrators, support staff, and developers diagnose and resolve problems faster by having the needed metadata in one place.
Splunk finds and organizes multiple files in search databases and provides a report language. Data can be entered by reading files in specified directories, tailing live files, or listening to messages sent via named pipes. Currently, searches are by key word, time and specified text strings. The Splunk server has software that tries to discover event relationships across network and machine domains.
For now, you can get the free Splunk Personal Server from splunk.com, get a public beta of their new product, or go to the project pages.
Open Source Hardware?
OpenGear demonstrated very cost-effective KVM consoles with remote access features. Not very dramatic, except that OpenGear is participating in the 'okvm' project (see the project page), which is developing Open Source console and KVM management software and is hosted on SourceForge. The okvm project is also developing plans for open source KVM hardware.
OpenGear currently sells an 8-port KVM for only $500 and a 48 port unit for only $1500 retail. A review in ITworld was very favorable. Visit opengear.com for more info on their KVM over IP products and the move to Open Source hardware.
Make the move
Resolvo Systems offers its MoveOver product in desktop and enterprise editions. The idea here is to automate the transfer of a user's Windows desktop settings, files, and environment to Linux. Resolvo makes a limited edition available free to individual users and has contributed source code to the new OpenMoveOver project on SourceForge.
As part of an LW2005 promotion, Resolvo is also offering its Quro Linux management product free to new registrants. It acts as a domain controller and LDAP server. Check that out at Resolvo.com.
Resolvo is partnering with CodeWeavers for migration to the CrossOver office suite, and MoveOver Enterprise was a Best Integration Solution finalist for this year's LinuxWorld.
Session PDFs... Best if you use the expo web site to get the session number first.
Here is a list of the session tracks...
Here is a link for all the LW2005 Product Excellence Awards finalists... what the competition was...
LinuxWorld-Boston is scheduled for April 3-6, 2006, and LinuxWorld-San Francisco 2006 will probably be held next summer. Drop in if you are on either coast then.
Howard Dyckoff is a long term IT professional with primary experience at
Fortune 100 and 200 firms. Before his IT career, he worked for Aviation
Week and Space Technology magazine and before that used to edit SkyCom, a
newsletter for astronomers and rocketeers. He hails from the Republic of
Brooklyn [and Polytechnic Institute] and now, after several trips to
Himalayan mountain tops, resides in the SF Bay Area with a large book
collection and several pet rocks.
By Edgar Howell
Let's get the negative out of the way up front: to my mind the contents of this book are not hacks. O'Reilly does have a "hacks" series and insiders may well have known what to expect but I consider the title at best a misnomer. There's no real programming, no patching object code, no modifying a module to make some other piece of hardware work.
This is a book about using Knoppix, um, disaster recovery, er, fixing Linux -- Windows even -- well, about so much that almost everybody ought to read it. It would be a shame if the title were to turn off potential readers. If it hadn't been for the sub-title, "100 Industrial-Strength Tips & Tools", I likely wouldn't have bothered to look at it at all.
The people who won't be interested in it are those who have no interest in computers per se and just started using GNU/Linux because they want a stable environment in which to get their work done without having to "open the hood", as it were.
But even old hands ought to find this book of interest, if for no other reason than the different approach often used: booting Knoppix on a machine with some other system(s) installed and then working on it -- frozen in time -- without necessarily changing anything or otherwise impacting what is there.
Perhaps it is worth mentioning for those still unfamiliar with Knoppix that it boots a PC, determines what hardware is available, and brings up a graphics interface in which all drives and partitions are accessible in read only mode until you deliberately change that status.
Consider the following examples: do you need or want to
At this writing, Knoppix version 4.0 has been out for a good two months; mine is on DVD. Although the CD that comes with the book is version 3.4, this should not be a problem when you get the DVD: the "hacks" should still work as long as you watch out for changes in file structure.
The book includes the following chapters (hack numbers in parentheses):
Although the entire book is a definite re-read, at the moment my favorite chapter is 8, with introductions to numerous varieties of Knoppix such as:
I can't remember when I have had a book in my hands so consistently filled with interesting and useful information from cover to cover. It was a delight to read and is full of (mostly) short sequences of CLI commands that can be used after changing only device names. And in passing it teaches more than one might expect, e.g. the usefulness of chroot on-the-fly.
Knoppix belongs in everybody's toolbox and Kyle Rankin's "Knoppix Hacks" (O'Reilly, ISBN 0-596-00787-6) belongs on everybody's bookshelf. It is well worth the $30 price and I highly recommend it to anyone seriously interested in PCs.
Edgar is a consultant in the Cologne/Bonn area in Germany.
His day job involves helping a customer with payroll, maintaining
ancient IBM Assembler programs, some occasional COBOL, and
otherwise using QMF, PL/1 and DB/2 under MVS.
(Note: mail that does not contain "linuxgazette" in the subject will be
rejected.)
Many years ago, I was taking a LOT of pictures at work with an early technology digital camera. It provided images in JPEG format. I was in the process of modifying the images to correct problems with contrast when I discovered that the camera did a lousy job of compressing the files.
With the standard software provided by the Independent JPEG Group (which contains the marvelous jpegtran utility), I could often reduce file size by 50%. Losslessly. How does this work? A standard JPEG image uses encoding parameters from sub-sections of the image. An optimized JPEG image uses encoding parameters from the entire image. While the JPEG algorithm is already lossy, the jpegtran utility preserves the "lossiness" of the original JPEG image exactly. The new file is thus lossless with respect to the original file.
Here's the command I used:
# jpegtran -optimize < original.jpg > optimized.jpg
Unfortunately, I'd taken several hundred photos, and processing them one at a time by hand would have taken forever. So... I wrote one of my first major shell scripts ever. Once I had it working, I was able to optimize all of the images in less than ten minutes! The opt-jpg script was born.
A few weeks later, I discovered that "progressive" JPEG images were sometimes even smaller, but not always. Thus, I modified my script to try it both ways, and to keep whichever result was smaller. This made for a trickier script, but the results were worth it. The opt-jpg script was improved.
And even later, an unfortunate event with misnamed BMP images forced me to add error-checking, so that the script wouldn't modify non-JPEG files. The opt-jpg script became more robust.
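The logic that evolved over those iterations can be sketched in a few lines of shell. To be clear, this is my own illustrative reconstruction of the approach described above, not the actual opt-jpg script from littleutils: check the file's magic bytes so non-JPEGs are skipped, try both plain and progressive optimization, and keep whichever file ends up smallest.

```shell
# Hypothetical sketch of the opt-jpg idea (NOT the real littleutils script):
# skip non-JPEGs, try plain and progressive optimization, keep the smallest.
opt_jpg_sketch() {
    for f in "$@"; do
        # A real JPEG starts with the bytes 0xFF 0xD8; skip anything else.
        # (This is the error-checking that guards against misnamed BMPs.)
        case "$(head -c 2 "$f" | od -An -tx1 | tr -d ' ')" in
            ffd8) ;;
            *) echo "skipping non-JPEG: $f" >&2; continue ;;
        esac
        jpegtran -optimize "$f" > "$f.opt" 2>/dev/null ||
            { rm -f "$f.opt"; continue; }
        jpegtran -optimize -progressive "$f" > "$f.prog" 2>/dev/null ||
            { rm -f "$f.opt" "$f.prog"; continue; }
        # Keep the smallest of the original and the two candidates.
        best="$f"
        for c in "$f.opt" "$f.prog"; do
            [ "$(wc -c < "$c")" -lt "$(wc -c < "$best")" ] && best="$c"
        done
        [ "$best" != "$f" ] && cp "$best" "$f"
        rm -f "$f.opt" "$f.prog"
    done
}
```

The real script is more careful about temporary files and error reporting, but the "try both, keep the winner" structure is the heart of it.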
Further time resulted in a GIF-optimizing script (based on gifsicle) and a PNG-optimizing script (based on pngcrush) as well, called opt-gif and opt-png, respectively. These work by colormap reduction, filtering changes, and other such tricks. You'd be amazed at how many images out there have an enormous 256-entry colormap and use only 3 or 4 of the entries. I recently packaged all of these scripts together and published them as the littleutils.
While my original motivation in writing these scripts was to deal with a lousy digital camera, they are also well-suited for optimizing all of the graphics on a web site. Why optimize your graphics? To save hard drive space. To fit more images on your site. To reduce the time it takes visitors to load your pages. The reasons are obvious.
So how does it work? We'll demonstrate with the web pages of the Linux Gazette itself. First, let's get all of the website files copied onto a local hard drive. The following command sequence (under bash) will accomplish this:
# wget --no-directories --timestamping --recursive --level=1 \
      --accept=.gz http://linuxgazette.net/ftpfiles/
# for i in *.tar.gz ; do tar -xvzf $i ; done
And before we begin, we need to establish how much filespace our current images require:
# cd lg/
# find . -name "*.jpg" -print | tar -cf ../jpg.tar -T -
# ls -l ../jpg.tar
# find . -name "*.jpeg" -print | tar -cf ../jpeg.tar -T -
# ls -l ../jpeg.tar
# find . -name "*.JPG" -print | tar -cf ../JPG.tar -T -
# ls -l ../JPG.tar

jpg.tar + jpeg.tar + JPG.tar = 44288000 bytes total

# find . -name "*.gif" -print | tar -cf ../gif.tar -T -
# ls -l ../gif.tar

gif.tar = 13066240 bytes total

# find . -name "*.png" -print | tar -cf ../png.tar -T -
# ls -l ../png.tar

png.tar = 21596160 bytes total
Next, you'll need to download and install the littleutils. It's a pretty standard "./configure && make && make install" routine. Once that's done, we can optimize the images:
# find . -name "*.jp*g" -exec opt-jpg {} \;
# find . -name "*.JP*G" -exec opt-jpg {} \;
# find . -name "*.gif" -exec opt-gif {} \;
# find . -name "*.png" -exec opt-png {} \;
After some lengthy period of time (PNG optimization is particularly slow), the images will be fully optimized. If the tar commands above are repeated, we get the following results (over a 6-megabyte improvement!):
jpg.tar + jpeg.tar + JPG.tar = 41185280 bytes total (a 7% savings!)
gif.tar = 12759040 bytes total (a 2.5% savings)
png.tar = 18452480 bytes total (a 15% savings!!)
Also, if you scroll through the results, you'll find that several files are misnamed. In particular, there were a lot of GIF images posing as PNG images. (Apparently a few people out there think that "mv image.gif image.png" is an easy way to convert image files. Not quite...) There were even a few Windows BMP images posing as PNG images. <blegh> A complete list of these files can be found here: badfile.txt. If these files are properly renamed and optimized (or better yet, properly converted and optimized), then further filespace savings can be achieved.
[ Thanks for spotting those for us, Brian; they're all fixed now. I take some small pleasure in noting that all the errors, with the exception of one class, were from before I took over running LG. The errors that came from me - mea culpa - resulted from the fact that "convert" fails to actually change the image type if the original file has a mismatched extension; that script is now also fixed, with the image types forced to the correct ones. -- Ben ]
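Detecting such impostors doesn't require much machinery. Here is a hedged sketch (a helper of my own, not part of littleutils) that flags files with a .png extension whose leading magic bytes actually say GIF or BMP:

```shell
# Hypothetical helper (not part of littleutils): report .png files whose
# magic bytes reveal a different format. PNG files start with 89 50 4E 47
# ("\x89PNG"), GIF files with "GIF8", and BMP files with "BM".
flag_misnamed_pngs() {
    find "$1" -name "*.png" | while read -r f; do
        sig=$(head -c 4 "$f" | od -An -tx1 | tr -d ' ')
        case "$sig" in
            89504e47) ;;                             # a real PNG; stay quiet
            47494638) echo "GIF posing as PNG: $f" ;;
            424d*)    echo "BMP posing as PNG: $f" ;;
            *)        echo "unknown format: $f" ;;
        esac
    done
}
```

The same trick, with different signatures, works for .gif and .jpg files; the `file` utility automates it if you'd rather not memorize magic numbers.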
While this example clearly shows that littleutils can be used to achieve considerable filespace savings, there are two major caveats:
[1] The image optimization in littleutils is aggressive, with all extraneous information being thrown away. This includes comments, ancillary tags, colorspace information, etc. You ought to run a few test cases before optimizing your entire Photoshop collection.
[2] The image optimization in littleutils does not preserve interlacing. GIF and PNG images will always have interlacing removed, and JPEG images may be converted to progressive or non-progressive (depending on which is smaller). If interlacing is particularly important to you, you'll need to skip optimization or modify the scripts to keep the interlacing as you want.
However, for most website purposes, the optimization scripts found in littleutils work quite well. Merry optimizing!!
For further website optimization, you might also consider using the repeats utility, also from littleutils. This nifty script will find duplicate files in any directory tree. If run in the Linux Gazette directory, the following duplicate files are found: repeats.txt. To reduce website filespace requirements even further, all but one of the duplicates could be deleted, and the HTML references to the deleted duplicates could be pointed to the remaining copy.
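The core idea behind duplicate detection is simple enough to sketch in shell. This is only a stand-in for the real repeats utility (which is faster and more careful about edge cases): checksum every file and print the groups that share a checksum.

```shell
# Minimal stand-in for littleutils' 'repeats': checksum every file under a
# directory, sort so identical checksums are adjacent, and print only the
# lines whose first 32 characters (the md5 hash) occur more than once.
find_dups() {
    find "$1" -type f -exec md5sum {} + |
        sort |
        uniq -w32 --all-repeated=separate
}
```

Each blank-line-separated group in the output is one set of identical files; keep one copy and repoint the HTML references to it.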
Brian Lindholm is a Virginia Tech graduate and middle-aged mechanical engineer who started programming in BASIC on a TRS-80 Model I (way back in 1980). In the late eighties, he moved to Pascal and C on an IBM PC-compatible.
Over the years, Brian became increasingly disgruntled with the instability and expense of the various Microsoft operating systems. In particular, he hated not being in full control of his system. MOST fortunately for him, however, he had a college roommate who ran Linux (way back in the Linux 0.9 and Slackware 1.0 days). That introduction was all he needed.
Over the years, he's slowly learned more and more, and now manages to keep his Debian system running happy and stable (even through two major upgrades: 2.2 to 3.0, and 3.0 to 3.1). [A point of note: His Debian system has NEVER crashed on its own. EVER. Only power failures, attempts to boot off the wrong partition, and a particularly flaky IDE Zip drive ever managed to take it down.] He loves Vim and has found Perl amazingly useful at work.
In his non-Linux life, Brian helps design power generation equipment (big power plant stuff) for a living, occasionally records live music for people, reads too much science fiction, and gets out on the Appalachian Trail as often as he can.
One of the biggest noises in Open Source this year was made by Google's announcement of the Summer of Code: a project to encourage students to participate in the development of open source projects during the summer holidays.
Google teamed up with a number of "mentor" groups, each of which provided a list of project ideas to the hopeful applicants. The applicants could then write a proposal for Google. Google sorted the applications by project and provided each mentor group with its list, along with the number of applicants it could choose.
The deadline for applications was June 14th; successful applicants were notified on the 25th, to have their project deliverables turned in by September 1st. The list of successful projects will be (will have been) announced on October 1st. (I have attempted to build a list of projects, sorted by mentoring organisation, which I have included as an appendix to this article.)
Successful participants receive $4500 for their efforts, with a further $500 going to the mentor organisation. Though Google had originally intended to select 200 applicants, the response was so high (8700 has been quoted) that in the end they chose 410.
This article, containing advice to future applicants, explains how the selections were made:
We received 126 applications. We selected 12 of these for approval. Google gave us 10 slots, so the top 10 choices were funded. It was very easy to reject perhaps a third of applications that were very poor, indeed. Where do people get the notion that cutting and pasting my own text from the project description so that I can review it is worth $4500? Who are the people who think that one-line or one-paragraph applications will get them the stipend? Some of these very brief applications were written by people with impressive resumes. It was obvious they could do better.
Unlike previous "bounties" for open source projects (Novell, for example, announced a set of Gnome Bounties a few months previously), Summer of Code was strictly for students. There were no restrictions on the basis of nationality, but proof of attendance at a third-level institute was a requirement in addition to the proposal, in which academic merit was to be shown, and deliverables defined.
Google have been roundly applauded for their Summer of Code idea, though it hasn't been without criticism. The amount of money in particular has been a source of complaint, as being well below the amount a summer intern would earn in the US. There have also been problems for participants outside the U.S. Although American students will not have to pay tax on the money from the Summer of Code, that does not apply to non-Americans, and for participants in countries that do not have tax treaties with the U.S., both local and American tax apply.
There has been some complaint about the choice of projects. One project that has been singled out as a waste of funding is The Sneaky Snakes, a Snake clone in Python, mentored by the WinLibre project. (In my opinion, that organisation deserves criticism for squandering funds; not for choosing the Snake clone, but for choosing three separate but similar updaters).
There was also quite a lot of disappointment from those who weren't accepted. The Summer of Rejects site has a list of Rejected Ideas, and the summer-discuss mailing list (Google group) had quite a few messages from people who intended to complete their projects without funding. (I haven't found any evidence of that happening, though).
The process wasn't problem free for the mentors either. In this message to the 'wine-devel' mailing list, Jeremy White of Codeweavers said:
To be honest, we're running a bit behind here; I think Google jumped into this without being as prepared as you might otherwise hope, and there have been challenges in figuring this all out. (We're still largely clueless about the responsibilities of mentors from Google's perspective, for example).
Several of the mentoring organisations had already been running bounties of their own. Some of them have mentioned the fear that SoC will have a negative effect on the bounties offered by open source projects:
Working for "free" is one thing; getting paid to work on Free and Open Source software is quite another. Google's funding of the various projects didn't raise notable problems, though it may have raised expectations for the future.
"One minor issue for us it that the reward offer is quite sizable, and was, in most cases, larger than what the bounty would have fetched in the usual Bounty system employed within Ubuntu," Ubuntu's Weideman said.
"This expectation will need to be managed."
(I think it's fair to say that if Ubuntu had come up with a list of projects specifically for Summer of Code, instead of merely pointing to their existing task list, then they wouldn't have this problem).
There were also a few odd wrinkles. This message on Subversion's SoC list shows a common event in open source: the applicant found another project that performs most of the same work as his proposed project.
Now, this is a little tricky: commitmessage.tigris.org is a separate project from Subversion, yet part of the point of Summer of Code is for the student to actually work with the designated mentoring organization -- which in this case is the Subversion project. So we have an unusual situation going on here. Brian submitted his code to commitmessage.tigris.org, which is entirely maintained by one person, Stephen Haberman. However, his mentoring organization is the Subversion project, which has thirty to eighty developers (depending on how you count), and naturally, the dynamics of the two projects are significantly different. Google's Summer of Code program is more geared toward the second kind: it's meant to give students a chance to work with large teams and their collaborative mechanisms.
Chris DiBona, who was Google's public face of the Summer of Code, has expressed a hope to repeat it, but the success of this year's programme will first have to be determined.
The information available about the chosen projects has been somewhat uneven: ironically, the projects mentored by Google are among the hardest to find information about.
Several of the participants kept blogs detailing their progress: some of these are aggregated at PlanetSoC. There's also a mugshot gallery of some of the participants.
In some cases, the work has been good enough to release: Mono 1.1.9 includes the Summer of Code work on JScript and the MonoDoc browser, Wine 20050830 includes theming and Mozilla work, etc.
Many of the projects were the usual sort of development tasks available in open source; understandable, as the participants only had two months in which to complete their projects.
Some, however, were extremely ambitious ideas. "JavaEye", one of the Project Looking Glass projects, has this in the introduction of its proposal: "Has anyone watched Minority Report? Have u seen Tom Cruise just use his hands moving in a 3D space to control the computer? Do you want to do it in the next generation Java Desktop? If yes, I can make it to become true." Perhaps this was a little too ambitious (at the time of writing no code had been made public), but it's certainly an intriguing idea.
Both Gnome and Gaim had similar ideas. CamTrack (Gnome) is an accessibility aid, which allows the user to move the mouse cursor with the motion of their head using a webcam. CamTrack is available for download. CrazyChat (available from Gaim CVS) determines facial expressions and head movements, again using a webcam, and controls a 3D avatar using this information.
Portland State University has, among other ambitious projects, a software implementation of 802.11; KDE has del.icio.us-like tag browsing for the file system; Gaim has a collaborative code editor... I could list my favourite project ideas, but I spent quite a while building the list below, so feel free to pick your own!
I couldn't find any information about the projects chosen by JXTA, LiveJournal, or Asterisk.
Like many other projects, Python coordinated their SoC projects through a mailing list and on a wiki.
Though Gallery set up a Google group for the projects, it went almost unused. They also set up a Sourceforge project, which thankfully didn't.
Project Looking Glass is Sun's Java-based 3D desktop system. Most of the projects here are reimplementations of common desktop utilities, with a 3D twist. The code for these projects is available from the lg3d incubator project on java.net.
Aside from the announcement of the successful projects, I haven't been able to find much information about the nmap projects.
WinLibre (projects) is a project to provide open source software for Windows.
Because Jabber is a standard, not a project per se, the projects listed cover a range of clients and platforms under the Jabber umbrella.
Very few of the Fedora projects are specific to Fedora, and mostly deal with general improvements to Linux.
Information about Internet 2 projects is quite sparse. The only information I could find is in a set of minutes (text, scroll almost to the bottom), and I have to admit that none of it means anything to me.
The Bricolage site was unavailable at the time of writing ("We're working on recovering the data from a catastrophic failure"). Hopefully, they'll have the problem fixed soon: you'll find their projects here.
As well as the Summer of Code projects, Horde also has a bounties page.
Miguel de Icaza's blog has a summary of the status of the SoC projects, among other things.
Two of these projects were never started.
On the projects page, the Perl projects are listed with interviews with their creators, which are linked to below, apart from BitTorrent (interview) and WWW::Kontent (interview, journal).
As a university, it's unsurprising that the academic merit of these projects is quite high. (See also some of the other projects completed by the "OSS class").
This information wasn't available on the Monotone website; thanks to Nathaniel Smith for providing it. He also said that all of the applicants were successful.
Gaim was one of the best documented SoC mentor organisations. As well as a project page with very clear descriptions of each project, they also provided blog space for each participant, as well as a Planet Summer of Gaim aggregator.
Few, if any, of the NetBSD projects are Linux-relevant, but I'm including them for completeness.
The Semedia projects page isn't very clear on this: I'm assuming that the projects that have had a mentor assigned are the projects being worked on.
As well as providing a mailing list and blog space (unused), the Samba team have provided space for the SoC projects on their projects website.
The Blender projects mostly go way over my head. Code is available here.
Rather than set up a separate project page, Ubuntu integrated the SoC projects with their existing bounties.
The list of projects appeared in Wine Weekly News, with a note of disappointment that at least one had been dropped, and there had been no code from another.
Of these projects, the Mozilla, Theming, and DirectInput have been added to Wine CVS.
Yet again I have to throw my hands up in defeat, though I was able to find one project.
Jimmy is a single father of one, who enjoys long walks... Oh, right.
Jimmy has been using computers from the tender age of seven, when his father
inherited an Amstrad PCW8256. After a few brief flirtations with an Atari ST
and numerous versions of DOS and Windows, Jimmy was introduced to Linux in 1998
and hasn't looked back.
In his spare time, Jimmy likes to play guitar and read: not at the same time,
but the picks make handy bookmarks.
By Pramode C.E.
The ideal programming language for beginners should offer the minimum barrier between thought processes and their concrete implementation as programs on the machine. It should not be a ‘toy’ language - its structure should be rich enough to express complex computer science concepts without being awkward. The language should encourage good coding habits and students should be able to look at it as an extension of the three things which they have already mastered to varying levels of proficiency - reading, writing and mathematics. Do we have such languages? Yes - the programming language Scheme fits in admirably.
DrScheme is a superbly designed programming environment for a family of implementations of the Scheme programming language. Combined with a great, freely available textbook, How To Design Programs, which lays emphasis on the program design process rather than on finer algorithmic/syntactical details, DrScheme is bringing about a revolution in the way elementary computer programming is taught to school/college students. The objective of this article is to provide a quick introduction to the fascinating DrScheme environment as well as the Scheme programming language; it would be great if some of the teachers who might be reading this get sufficiently motivated to give DrScheme a try in their classes!
Why Yet Another Language, you might ask. Why not just teach the kids one of C/C++/Java, the three most popular programming languages?
I have been teaching introductory classes in computer programming using C for many years. Most of my time is spent teaching the intricacies of the language's low-level syntax, especially pointer handling, memory allocation/deallocation, etc. But isn't this an essential part of the profession of programming - are we not supposed to understand how things work deep down? Yes - but there are two problems here:
Precompiled binaries of DrScheme can be downloaded from the project home page. The executable is called ‘drscheme’. Here is a snapshot of the integrated development environment which includes an editor, an interactive interpreter prompt and a ‘stepper’ for visualizing the evaluation of Scheme expressions.
The work area is divided into two parts - the upper part is the editor and the lower part, the interactive interpreter prompt. Short code segments can be typed directly at the interpreter prompt and get evaluated immediately. Longer programs can be written in the editor window and executed by hitting the ‘run’ button.
Here are two simple Scheme expressions:
(+ 1 2)
(+ (* 2 3) (* 4 5))

which evaluate to 3 and 26, respectively. What one notices immediately is the use (or abuse, as some might say) of parentheses, as well as the unusual prefix notation. Both have their own logic. In a conventional language like C, a function call looks like this:
fun(1, 2);

This is prefix notation (‘fun’ comes before the operands). But operators work in an infix manner:
1 + 2

Scheme doesn't differentiate between operators and functions - operators too are functions, and both are applied in the same way: in uniform prefix style. Sacrificing ‘conventions’ to get a highly regular structure isn't too bad an idea.
The DrScheme editor performs parenthesis matching automatically; so you need not spend time trying to do it by hand.
A DrScheme program is composed of a set of Scheme expressions - the form of an expression can be captured concisely as:
(action arg1 arg2 arg3 ... argn)

What if we type:
(1)

at the interpreter prompt? DrScheme generates an error message:
procedure application: expected procedure, given 1 (no arguments)

The designers have taken special care to make the error messages simple and meaningful.
In Scheme, all computations are done via the application of functions. Defining a function is very simple:
(define (my-sqr x) (* x x))

The function can be called like this:
(my-sqr 3)

The ‘everything as function’ point of view simplifies things a great deal. Here is how we write an if statement in Scheme:
(define a 1)
(define b 2)
(if (< a b) (+ 1 2) 5)

The general format of an ‘if’ statement is:
(if expr1 expr2 expr3)

You might think of ‘if’ as a special kind of function which takes three expressions as arguments: it evaluates the first expression, and evaluates the second expression only if the first one is true; otherwise, the third expression is evaluated. The value of the evaluated expression is returned.
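Since ‘if’ is itself an expression, a function body can consist of nothing but an ‘if’. Here is a small sketch (the function name abs-val is mine, chosen for illustration; Scheme already provides a built-in abs):

```scheme
; Absolute value: the whole body is a single if-expression,
; and its value becomes the function's return value.
(define (abs-val x)
  (if (< x 0)
      (- x)
      x))

(abs-val -7) ; evaluates to 7
```

There is no need for an explicit ‘return’: the value of whichever branch is evaluated is the value of the call.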
Here is how we write the famous factorial function as a Scheme program:
(define (fact n)
  (if (= n 0)
      1
      (* n (fact (- n 1)))))

It's essential that students get a feel for how the expression, say,
(fact 3)

is evaluated by the interpreter. One of the advantages of the ‘functional’ approach adopted by Scheme is that most evaluations can be modeled as simple substitutions. Thus, the interpreter would, in the first step, substitute (fact 3) with:
(* 3 (fact 2))

Here is how the process of substitution proceeds further:
(* 3 (* 2 (fact 1)))
(* 3 (* 2 (* 1 (fact 0))))
(* 3 (* 2 (* 1 1)))
(* 3 (* 2 1))
(* 3 2)
6

An interesting feature of the DrScheme environment is a ‘stepper’ which helps you visualize this evaluation process. Here is a snapshot of the stepper in action:
Note that we are presenting recursion as something quite natural: contrast this with the way we introduce it in ‘conventional’ languages - as something magical and mysterious!
We have seen the use of the ‘define’ keyword to bind values to names. Let's try the following experiment:
(define a 10)
(define a (- a 1))

We are trying to write the Scheme equivalent of the C statements:
a = 10;
a = a - 1;

We note that Scheme gives us an error message:
define: cannot redefine name: a

So, ‘define’ is not the same as the classical assignment operator! In fact, we use a different operator, ‘set!’, to do assignment. Let's try:
(define a 10)
(set! a (- a 1))

You might either get a syntax error, or you will see that set! has the desired effect. The behaviour depends on the current language ‘level’ you have chosen. Hit Alt-L and a window pops up:
You see that the language level currently in effect is ‘Intermediate student with lambda’. Each ‘level’ exposes the student to a part of Scheme. Because assignment is a complex operation, it becomes available only when you choose the ‘advanced student’ level. The obvious question now is: why should assignment be considered complex?
Let's think of the factorial function expressed as a C routine:
int fact(int n)
{
    int f = 1;
    while (n != 0) {
        f = f * n;
        n = n - 1;
    }
    return f;
}

What if we interchange the two statements within the body of the ‘while’ loop? We get a serious error - the function now returns 0 for all values of ‘n’ other than 0.
The assignment operator introduces complex ordering dependencies between the statements in our program. A beginner will have to spend considerable time and effort before all the ‘ordering bugs’ are purged from the code. In a ‘pure functional’ style of coding, where we do not make use of assignment, such bugs do not occur. It so happens that a lot of very interesting programs can be constructed without using assignment. There are also plenty of programs (mostly modelling real-life entities like, say, a bank account) which can be built only by using explicit assignment. An intelligent teaching strategy is to first let the students gain enough experience writing ‘pure-functional’ code and then introduce assignment as an advanced operation.
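To make this concrete, here is a sketch of the C factorial loop rewritten in the assignment-free style: the ‘loop variables’ f and n become function arguments, so there is no statement ordering left to get wrong. (The names fact-iter and fact2 are mine, not from the article's listings.)

```scheme
; Accumulator-passing factorial: the loop state travels as
; arguments to a recursive call instead of being updated
; in place with assignment.
(define (fact-iter n f)
  (if (= n 0)
      f
      (fact-iter (- n 1) (* f n))))

(define (fact2 n) (fact-iter n 1))

(fact2 5) ; evaluates to 120
```

Each recursive call plays the role of one trip around the C ‘while’ loop, but the new values of f and n are computed together, in a single expression.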
Functions are first class objects in Scheme - they can be stored in variables, passed to and returned from other functions.
(define a +)
(a 1 2) ; evaluates to 3

(define (my-sqr x) (* x x))
(define (foo f x) (f x))
(foo my-sqr 3) ; evaluates to 9

It's possible to create functions ‘on-the-fly’ using ‘lambda’:
(define (foo f x) (f x))
(foo (lambda (x) (* x x)) 3) ; evaluates to 9

The general format of ‘lambda’ is:
(lambda (arg1 arg2 ... argn) expression)

It generates an ‘anonymous’ function which accepts ‘n’ arguments and whose body evaluates to the value of ‘expression’.
Look at the code below:
(define (add x)
  (lambda (y) (+ x y)))

(define f (add 1))
(f 2) ; evaluates to 3

The function ‘add’ returns a function; in the body of this returned function, the value of ‘x’ is the value it had at the point when the function was created. Functions in which certain variables have values ‘captured’ from the environment that existed when the function was created are called ‘closures’.
Closures and higher order functions provide tremendous expressive power to the language; the computer science classic Structure and Interpretation of Computer Programs has a thorough discussion of this topic.
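As one more small illustration of this expressive power (the names here are mine, chosen for the example), here is a function that builds a new function out of two existing ones - the returned closure captures f and g:

```scheme
; my-compose returns a closure that applies g first, then f.
(define (my-compose f g)
  (lambda (x) (f (g x))))

(define (add1 x) (+ x 1))
(define (double x) (* x 2))

(define double-then-add1 (my-compose add1 double))
(double-then-add1 5) ; evaluates to 11
```

Note that double-then-add1 is an ordinary value, built at run time, that can itself be passed to other functions - exactly the kind of ‘language building’ SICP discusses.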
Let's define a function ‘fun’:
(define (fun x y)
  (let ((p (+ x y))
        (q (- x y)))
    (* p q)))

(fun 2 3) ; evaluates to -5

The general syntax of ‘let’ is:
(let ((var1 exprn1)
      (var2 exprn2)
      ...
      (varn exprnn))
  body)

In the body of the ‘let’, var1, var2, etc., are bound to the values obtained by evaluating exprn1, exprn2, etc. These bindings are ‘local’ in the sense that they vanish once the evaluation of the body of the ‘let’ is over.
Using structures is extremely simple in Scheme:
(define-struct point (x y))
(define p (make-point 1 2))
(point-x p) ; evaluates to 1
(point-y p) ; evaluates to 2

True to the functional nature of the language, the ‘define-struct’ keyword does nothing more than construct three functions on the fly: a ‘constructor’ function called ‘make-point’, which builds an object of type ‘point’, and two ‘accessor’ functions, ‘point-x’ and ‘point-y’, which help us retrieve the values of the fields x and y.
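As a quick sketch of how the generated accessors get used in practice (the function name dist-from-origin is my own; the define-struct is repeated so the snippet is self-contained), here is the distance of a point from the origin:

```scheme
(define-struct point (x y))

; Distance from the origin, built entirely out of the
; accessor functions that define-struct generated for us.
(define (dist-from-origin p)
  (sqrt (+ (* (point-x p) (point-x p))
           (* (point-y p) (point-y p)))))

(dist-from-origin (make-point 3 4)) ; evaluates to 5
```

The structure behaves like any other value: it can be passed to functions, stored in lists, and so on.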
The ‘list’ data structure is central to Scheme programming. The three fundamental operations on a list are car, cdr and cons. The following code segment illustrates the use of all three operations.
(define a (list 1 2 3)) ; create the list (1 2 3) and bind it to ‘a’
(define b '(1 2 3))     ; same as above; note the use of the quote
(empty? b)              ; returns false
'()                     ; the empty list
(cons 1 '(2 3))         ; yields '(1 2 3)
(car '(1 2 3))          ; yields 1
(first '(1 2 3))        ; same as above
(cdr '(1 2 3))          ; yields '(2 3)
(rest '(1 2 3))         ; same as above
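These operations combine naturally with recursion: a list is either empty, or a first element ‘cons’ed onto the rest. As a small sketch (the name sum-list is mine, not a built-in), here is a function that adds up the elements of a list:

```scheme
; Sum the elements of a list: the empty list contributes 0;
; otherwise, add the first element to the sum of the rest.
(define (sum-list lst)
  (if (empty? lst)
      0
      (+ (first lst) (sum-list (rest lst)))))

(sum-list '(1 2 3)) ; evaluates to 6
```

Almost every list-processing function in Scheme follows this same two-case shape, mirroring the two-case definition of a list itself.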
It's interesting to note that we have already seen enough Scheme syntax to write fairly sophisticated programs. The DrScheme programming environment comes with ‘teachpacks’ which offer additional (and often interesting) ‘library-level’ functionality. Let's take the ‘add teachpack’ option from the ‘Language’ menu and include the ‘draw.ss’ teachpack. Then we run the following code segment:
(start 300 300)            ; opens a window
(define p (make-posn 0 0)) ; creates a ‘posn’ structure
(define q (make-posn 100 100))
(draw-solid-line p q 'red) ; draws a line from p to q in red
Let's play a game. We plot three arbitrary points A, B, and C on a piece of paper. In addition, we plot an arbitrary initial point X1. Now, we roll a three-sided die whose sides are labelled A, B, and C. If A comes up, we plot the midpoint of the line joining X1 to A; if B comes up, we plot the midpoint of the line joining X1 to B; the case with C is similar. We will call this new point X2. Roll the die again. Plot the midpoint of the line joining X2 to either A, B, or C, depending on the outcome of the die toss. We call this new point X3. Repeat this process indefinitely.
A computer program allows us to easily plot thousands of such points. The image which arises out of this seemingly random process has a striking regularity. Let's write a Scheme program to implement our ‘game’.
First, let's define some utility functions:
(define (draw-point p color)
  (draw-solid-line p p color))

(define A (make-posn 150 10))
(define B (make-posn 10 290))
(define C (make-posn 290 290))
(define X1 (make-posn 150 150))

(define (average x y) (/ (+ x y) 2))

(define (mid a b)
  (make-posn (average (posn-x a) (posn-x b))
             (average (posn-y a) (posn-y b))))

The Scheme function invocation (random n) returns a random number between 0 and n-1. Let's write a function which returns either the point A or B or C depending on the toss of a die:
(define (A-or-B-or-C)
  (let ((i (random 3)))
    (cond ((= i 0) A)
          ((= i 1) B)
          ((= i 2) C))))

The ‘cond’ expression returns A if (= i 0) is true; otherwise it returns B or C, depending on which of the expressions (= i 1) or (= i 2) is true.
We need an initialization routine which will open the window and plot the first 4 points:
(define (initialize)
  (begin (start 300 300)
         (draw-point A 'blue)
         (draw-point B 'blue)
         (draw-point C 'blue)
         (draw-point X1 'blue)))
Now it's time to compose the main plotting routine (let's call it plot-curve):
(define (plot-curve x n)
  (when (not (= n 0))
    (let ((i (mid x (A-or-B-or-C))))
      (begin (draw-point i 'red)
             (plot-curve i (- n 1))))))

The complete program is shown in Listing 1. The graphical output we get looks like this:
One interesting point about plot-curve is that we can almost read it aloud; there is not much difference between the pseudo-code description and the actual function. In a way, writing a program is akin to writing prose or poetry - a program is meant to be read, understood, appreciated and enjoyed by a human being besides being executed on the machine. The earlier one appreciates this fact, the better.
The peculiar syntactical structure of Scheme forces you to think of writing your code as layers of functions; it's virtually impossible to write spaghetti code (of course, nothing prevents you from purposefully obfuscating your code). Each function definition can be thought of as building up a vocabulary with the ultimate objective of describing the solution in terms of a language which is as close to its word-description as possible. SICP again provides some interesting examples of the ‘programming as language building’ activity.
The ‘turtle.ss’ teachpack is great fun to play with. Load this teachpack and just type:
(turtles)

at the interpreter prompt. A window pops up with a small triangle at the centre; this is our ‘turtle’. You can move the turtle all over the screen and make it trace out interesting figures by executing simple commands.
(draw 50)             ; the turtle moves 50 pixels in the current direction,
                      ; drawing a line of length 50 pixels
(turn 72)             ; turns the turtle 72 degrees counterclockwise
(move-offset -200 10) ; changes the position: the new horizontal position
                      ; will be old + (-200); the new vertical position
                      ; will be old + 10

The Koch curve is an interesting fractal figure. The figure below demonstrates the construction procedure:
We start off with a single line segment, then divide it into three equal segments. We draw an equilateral triangle that has the middle segment as its base, and then remove the middle segment. We apply this procedure recursively to each of the resulting line segments.
The idea can be implemented as a simple Scheme program using the turtle graphics teachpack.
(turtles)
(move-offset -200 0)

(define (draw-koch level len)
  (if (= level 0)
      (draw len)
      (begin (draw-koch (- level 1) (/ len 3))
             (turn 60)
             (draw-koch (- level 1) (/ len 3))
             (turn -120)
             (draw-koch (- level 1) (/ len 3))
             (turn 60)
             (draw-koch (- level 1) (/ len 3)))))

(draw-koch 5 400)
Here is the output generated by the program:
We have seen that, using Scheme, we can do cool math-oriented stuff very easily. This doesn't mean the language can't be used for ‘real-world’ tasks like talking to a program over the network, reading from and writing to files, running multiple threads, etc. For most of these things, we will have to choose the ‘PLT’ option from the ‘Language’ menu.
This article has presented a very superficial view of the DrScheme programming environment (as well as the Scheme language) - the best way to really get started is by reading How To Design Programs and visiting the TeachScheme Project Home Page. I am sure that the ‘Scheme Philosophy’ will radically change your approach towards teaching and learning elementary computer programming.
As a student, I am constantly on the lookout for fun
and exciting things to do with my GNU/Linux machine. As
a teacher, I try to convey the joy of experimentation,
exploration, and discovery to my students. You can read about
my adventures with teaching and learning here.
These images are scaled down to minimize horizontal scrolling. To see a panel in all its clarity, click on it.
All HelpDex cartoons are at Shane's web site, www.shanecollinge.com.
Part computer programmer, part cartoonist, part Mars Bar. At night, he runs
around in a pair of colorful tights fighting criminals. During the day... well,
he just runs around. He eats when he's hungry and sleeps when he's sleepy.
The Ecol comic strip is written for escomposlinux.org (ECOL), the web site that supports es.comp.os.linux, the Spanish USENET newsgroup for Linux. The strips are drawn in Spanish and then translated to English by the author.
These images are scaled down to minimize horizontal scrolling. To see a panel in all its clarity, click on it.
All Ecol cartoons are at tira.escomposlinux.org (Spanish), comic.escomposlinux.org (English) and http://tira.puntbarra.com/ (Catalan). The Catalan version is translated by the people who run the site; only a few episodes are currently available.
These cartoons are copyright Javier Malonda. They may be copied, linked or distributed by any means. However, you may not distribute modifications. If you link to a cartoon, please notify Javier, who would appreciate hearing from you.
From Jimmy O'Regan
ARCHAEOLOGY: FIRST COCKTAIL 5,000 YEARS OLD http://www.agi.it/english/news.pl?doc=200509101618-1098-RT1-CRO-0-NF51&page=0&id=agionline-eng.arab
Don't dumb me down http://www.guardian.co.uk/life/badscience/story/0,12980,1564369,00.html ("It is my hypothesis that in their choice of stories, and the way they cover them, the media create a parody of science, for their own means.")
The Six Dumbest Ideas in Computer Security http://www.ranum.com/security/computer_security/editorials/dumb (see also: http://www.dilbert.com/comics/dilbert/archive/images/dilbert2813960050912.gif)
- Oh, and this, just because I find it amusing:
- http://www.dilbert.com/comics/dilbert/archive/images/dilbert2045784050823.gif
From Jimmy O'Regan
Andy Hertzfeld's[1] Folklore.org is a 'kind-of blog'[2], which is intended to be used for collective historical story-telling. At the moment, it only has Hertzfeld's own stories about the early days of the Macintosh (now made into a book: http://www.amazon.com/exec/obidos/asin/0596007191/ref%3Dnosim/folklore-20/002-7725544-7696029) but those are pretty interesting.
Try, for instance, Donkey (http://folklore.org/StoryView.py?project=Macintosh&story=Donkey.txt), about one of the games that came with the original DOS PC:
"we were amazed that such a thoroughly bad game could be co-authored by Microsoft's co-founder, and that he would actually want to take credit for it in the comments."
Or "Black Wednesday" (http://folklore.org/StoryView.py?project=Macintosh&story=Black_Wednesday.txt): '"No, you're just wasting your time with that! Who cares about the Apple II? The Apple II will be dead in a few years. Your OS will be obsolete before it's finished. The Macintosh is the future of Apple, and you're going to start on it now!".
With that, he walked over to my desk, found the power cord to my Apple II, and gave it a sharp tug, pulling it out of the socket, causing my machine to lose power and the code I was working on to vanish. He unplugged my monitor and put it on top of the computer, and then picked both of them up and started walking away. "Come with me. I'm going to take you to your new desk."'
Reality Distortion Field (http://folklore.org/StoryView.py?project=Macintosh&story=Reality_Distortion_Field.txt)
............... "Well, it's Steve. Steve insists that we're shipping in early 1982, and won't accept answers to the contrary. The best way to describe the situation is a term from Star Trek. Steve has a reality distortion field." "A what?" "A reality distortion field. In his presence, reality is malleable. He can convince anyone of practically anything. It wears off when he's not around, but it makes it hard to have realistic schedules. And there's a couple of other things you should know about working with Steve." "What else?" "Well, just because he tells you that something is awful or great, it doesn't necessarily mean he'll feel that way tomorrow. You have to low-pass filter his input. And then, he's really funny about ideas. If you tell him a new idea, he'll usually tell you that he thinks it's stupid. But then, if he actually likes it, exactly one week later, he'll come back to you and propose your idea to you, as if he thought of it." ...............
[1] Linux connection: founded Eazel, who developed Gnome's Nautilus file manager. Interview here: http://www.pbs.org/cringely/nerdtv/transcripts/001.html
[2] As you might guess, it's a blog for the past: it has the usual blog features: RSS feeds, comments, categories, and the like; it also has fields for the characters involved, related stories, and the date field is geared towards the original time of the story, as well as the ability to rate the story and add related external links.
The site mentions that the source code is to be released under the GPL, but that was almost a year and a half ago.
From Jimmy O'Regan
http://www-128.ibm.com/developerworks/power/library/pa-chipschall11/?ca=dgr-lnxw01E-Bookery
............... I've already got a host of ideas for video games to accompany classic literature:
...............
I've got to confess: I got an e-book (the lure of new pTerry weeks early was too strong to resist). Yep, DRM sucks, as does the viewer software, but I have to take my hat off to the underhanded mind that decided the ebook would be encrypted with the credit card number used to buy the thing. That's just... devious.
On the plus side, I've got a new fortune file...
See attached thud
Oh, BTW, the book is great, as is "Anansi Boys" by Neil Gaiman
............... Mrs Higgler was older than Mrs Bustamonte, and both of them were older than Miss Noles and none of them was older than Mrs Dunwiddy. Mrs Dunwiddy was old, and she looked it. There were geological ages that were probably younger than Mrs Dunwiddy. As a boy, Fat Charlie had imagined Mrs Dunwiddy in Equatorial Africa, peering disapprovingly through her thick spectacles at the newly erect hominids. 'Keep out of my front yard,' she would tell a recently evolved and rather nervous specimen of Homo habilis, 'or I going to belt you around your ear-hole, I can tell you.' ... The creature laughed, scornfully. 'I,' it said, 'am frightened of nothing.' 'Nothing?' 'Nothing,' it said. [...] 'Are you extremely frightened of nothing?' 'Absolutely terrified of it,' admitted the Dragon. ...............
From Jimmy O'Regan
http://www.theonion.com/content/node/40076/1
............... "Our users want the world to be as simple, clean, and accessible as the Google home page itself," said Google CEO Eric Schmidt at a press conference held in their corporate offices. "Soon, it will be." ... "Book burning is just the beginning," said Google co-founder Larry Page. "This fall, we'll unveil Google Sound, which will record and index all the noise on Earth. Is your baby sleeping soundly? Does your high-school sweetheart still talk about you? Google will have the answers." ...............
From Jimmy O'Regan
http://uncyclopedia.org/wiki/English-American_Dictionary
No, read it!
[Suramya] Nice one. Check out an American's guide to speaking British: http://www.effingpot.com
[Thomas] I just couldn't pass this up, without a reply.
[Sluggo] I knew you wouldn't be able to.
[Sluggo] That one's great. Comments:
FOOD & DRINK
Bacon.
Ugh. OK, my comments (as the closest thing to a resident bacon expert)...
[Sluggo] So that's why Canadians call Canadian bacon "back bacon".
[Jimmy] It's called 'back bacon' because... (drum roll...) it's made from the pig's back. Better quality meat. Usually, the bacon that's made into rashers comes from the pig's stomach (pork bellies are one of the more important agricultural commodities), and "dinner bacon" usually comes from the legs.
IIRC, 'back bacon' is actually ham
[Thomas] Hehe, so now you see? Indeed, one thing that I am quite annoyed with over here is that pre-packed bacon tends to have the rashers packed with water --- so that when you cook them (fry them, usually), they don't fry but boil. The net result is that the rasher shrinks to nothing.
Well, there are two views you can take on this. Either: a) that's the idea, or, if you know a little more about bacon curing (which, unfortunately, I do), b) that's an unfortunate side-effect of making bacon more affordable.
(Or maybe even: c) a little from column A, a little from column B)
[Sluggo] Blancmange & pudding. My British friend said our pudding is "blanche menage". Not quite what this thing says.
[Thomas] Hehehehe, that's a nice play on words.
[Kat] I haven't read the link yet, but when Ben relayed "blanche menage" to me last night, I responded with "I've heard it called 'blench mange'" (pronounced as in the skin ailment, not as in the French).
[Sluggo] Doner. My mom used to make shish kebabs on a skewer, sans pita. Yum! A vertical lamb on a spit is a vertical lamb on a spit, not a doner or a gyro. Although you might call it "gyro meat". Now, what's the difference between these and a shaverma? (Note: gyro is supposed to be gyros (singular), although nobody except Greeks pronounce it that way.)
[Kat] Doner/gyros/shawerma-shwarma-shwerma-etc.: No difference that I have detected other than the national/linguistic origin of the cook/vendor. (Turk/Greek/Arab)
[Sluggo] Like Greek coffee vs Turkish coffee vs Arabic coffee.
[Ben] For me, there's actually a difference between Armenian and Turkish coffee [1], although it may well be connected to the specific areas of Armenia and Turkey from which the people who made the stuff for me have come. (I actually have a lot more experience with Armenian coffee, even after all these years in the US. Go figure.) I strongly suspect that there's a lot of overlap, particularly in regions that border on each other, but my sample set has not yet included that.
[1] As Sirdar Argic cheers his idiotic head off...
[Jay] Serdar.
http://en.wikipedia.org/wiki/Serdar_Argic
[Ben] Why, you turkey - OOPS!!!
[Jay] Benjamin Okopnik's criminal Armenian grandparents are responsible for the conversion of linuxgazette.com to a CMS.
[Sluggo] English muffin. What a hoot. What about Australian toaster biscuits (which I think are called crumpets in England). They're like an English muffin but more solid and sweet.
[Thomas] Hmm. A muffin is a muffin (unlike a crumpet, which is very airy, muffins are somewhat more solid, and have a much more floury taste).
[Sluggo] Scone. The hospital I worked at served them with raspberry jam. No clotted cream, tea, or strawberry jam. That may be highly gauche but they're great tasting that way.
[Thomas] Indeed. Mmmm, clotted cream is nice with them, though. As is just honey.
[Sluggo] Clotted cream sounds very tasty from all the descriptions I've read.
[Kat] There's an American tendency of late to doctor the dough itself with fiftykajillion additions, making it darned near impossible to find the standard thing other than in "tea shoppes".
[Sluggo] Scoff. Closest equivalent is "scarf it down". Kind of the reverse of the ass/arse thing. Scoff is what I would do if Ben said, "I'm an innocent lad who has never done anything wrong."
[Thomas] Indeed. The meaning of 'scoff' to show contempt has always been a much lesser use. But both usages are now acceptable.
[Kat] Well you should scoff there, it'd be utterly lacking in style points!
[Sluggo] Stuffed. "I'm stuffed," means very full, not just full. Especially used around Thanksgiving. It may have an additional meaning in Texas.
[Thomas] It's usually a very common saying around Christmas time (having just spent three hours eating a turkey and all the trimmings).
[Sluggo] Water. Why would you ask a salesman in a washing machine shop, "Is water metered here?" What does that mean?
[Thomas] 'metered' as in the amount of water used is registered on a meter (that is, in units of something or other).
[Sluggo] It sounds like "Mind the gap" or "Does the red light not apply?" (as a Dublin garda said).
[Rick] The latter does sound wonderfully Irish.
That's an Irishism? I suppose it must be: doubly so, as it reminds me of a story.
I was walking down the street with my son's mother, many years ago when we were still a couple. She wasn't watching where she was going, and bumped into a man with a labrador who was coming the other way. He said nothing, and kept going; she decided that he bumped into her and inhaled deeply to start throwing abuse (she has a pretty short fuse).
I grabbed her arm and dragged her along. This shocked her into silence (I was hen-pecked, I can admit it) for a few seconds, before she started to protest.
"What's wrong with him? Is he blind?"
"Yes. Did the guidedog not give it away?"
[Rick] On the other hand, I think I wax positively English in the caustic wording I employ for notes on windshields of people who've blocked handicapped parking zones:
"Dear Sir:
I'm afraid that being morally handicapped doesn't count."
[Brian] Whereas the missive I'd prefer to leave on a windshield or five is caused by people with temporary (or permanent) handicap stickers, but literally no mobility or visible handicap I can discern that warrants being able to park specially. However, in many of these cases, I've been able to find another reason for their markings:
"I've been following you about to see whether the Handicapped Sticker was for your driving or parking (I hesitate to use the word 'skills', under these unfortunate circumstances). I'm so sorry that you seem to be doubly cursed."
[Kat] A gentle note here:
Mobility impairment does not necessarily manifest visually as lurching or limping or halt movement. It may be a case of something like congestive heart failure or some other incapacity of stamina rather than motion.
(This is not to say that I'm not aware of the tendency of some physicians to issue spurious placards for the benefit of conniving persons. Just that not all cases of invisible handicap equate to lack of need.)
[Sluggo] Yes. My mom is not visibly handicapped but it's hard for her to walk more than a block without resting, and the hot sun can exacerbate her condition (MS). So those handicapped parking spaces make a big difference. God forbid somebody should decide she's "not handicapped enough". People don't realize the enormity of problems handicapped people deal with...
[Rick] You might wish to know that "enormity" is not a noun used to denote magnitude. It means "great and notorious wickedness".
That's a common usage error. I'm guessing that someone once encountered a phrase like "sentenced for the enormity of his deeds" and erroneously concluded that "enormity" must be something like "enormousness". And thus the misconception became established.
[Jay] Indeed:
http://dictionary.reference.com/search?q=enormity
I hadn't realized it either. Course, it's less out of place in Mike's comment...
[Sluggo] Weird. I've only heard it the other way. Unless I've been misunderstanding it all this time.
[Sluggo] (a) being turned down for Access vans because she can use the bus lifts. "It's not using the lifts that's the problem, it's walking to the bus stops, which are now a half mile apart." And getting hit by cars crossing the street to said bus stop, as has happened twice.
(b) Section 8 (the apartment subsidy program) demands an inch-thick set of forms every year to justify the voucher, with doctors' statements, income and expense receipts. (That's the program Bush has frozen and wants to eliminate in a couple years.)
(c) an apartment manager with anger management problems.
(d) her car is an '84 Toyota held together by prayer. If it conks out, getting to medical appointments becomes a much bigger ordeal.
(e) writing to drug companies asking for reduced-price medications. Some companies require little paperwork; others a lot.
Just doing this, and grocery shopping, and looking for an apartment with a nicer manager is a full-time job in itself.
[Sluggo] All these verbs sound too formal for the situation. The equivalent here would be, "Do you have to pay for water around here or is it free?" But it's still a strange question to ask a washing machine salesman.
[Sluggo] Everyone buys water by the cubic foot, yes. Although in some cities the landlord has to include it in the rent so it's a de facto flat rate.
[Thomas] Indeed.
[Sluggo] One thing I noticed in England was a hotel with a machine in the shower charging for hot water. I was incensed.
[Thomas] That sounds stupid to me, too.
[Sluggo] I already paid for my room, dammit. Isn't a hot shower one of the basic things you expect in the deal? Unsure if electricity's really that expensive in England,
[Thomas] It is getting more expensive yes, as is gas.
[Sluggo] and if so, why tankless water heaters (which use a lot of energy) are so common.
White tea. If white tea means black tea with milk, what do you call real white tea? http://coffeetea.about.com/od/typesoftea/a/whitetea.htm
[Thomas] Pass.
[Sluggo] You mean your white tea isn't?
[Sluggo] SLANG
Bang/chat up/cram/fluke/haggle/hanky panky/hunky-dory/nookie/not my cup of tea/piece of cake/puke/put a sock in it/round/sacked/sloshed/suss/twit. All used here.
[Thomas]
[Sluggo] Cheesed off. That's the funniest one on the list. It doesn't really work here; it sounds too much like cheesy or cheesehead.
[Thomas] c.f. (although not strictly related): "Hard Cheese".
[Kat] ...? Where here? Works for my set, but it's a tainted sample as it's full of Anglophiles.
Kat, who's been horrifying Ben by spouting "Southernese"
[Rick] Which, I would think, remains vanishingly rare in Florida, whose cultural orbit oscillates mostly between Long Island and Havana, I had thought.
[Jay] Florida's not in the South; surely you knew that?
[Rick] That would have been my strong suspicion -- though my brother-in-law from the Panhandle makes (embodies) a pretty good argument for those aforementioned pockets of Dixie.
[Jay] Well, yeah, some of the South is in Florida.
Cheers, jr 'and it's always acceptable to be a redneck' a
[Rick] Except in Pensacola and like that.
[Ben] Oh, Northern Florida is quite the country unto itself. Twenty miles inland, the whole "southern cracker" thing is alive and well - including the slave camps (a.k.a. "labor pools", recently exposed in a few of the local papers), the accents, the attitudes... hell, up until the late '70s, Bunnell - about thirty or forty miles south of St. Augustine - was the state HQ of the KKK. I take especial joy in riding through that kind of place with Kat behind me and watching those jaw muscles tighten up; I wish they would say something to me.
It's a high-contrast place we live in; lovely, friendly, St. Augustine [1], with half a dozen yoga studios, visiting Tibetan monks, and a liberal arts college - and labor pools twenty miles away. Meanwhile, the (local) accents here are an amazing lazy-mouth gumbo, damn near a Louisiana/Alabama mush with an occasional burr thrown in for variety. It can be lovely as a flavoring (say, in a librarian from Hastings whom I met at a party, who sounded like fine velvet feels), but feels like a chainsaw two inches inside your ear when used in its raw form.
Kat hasn't gotten that one down yet, and I'm quite grateful.
[1] Named, mind you, after the most vile, intolerant, rigid, asexual freak - oh, and a brilliant logician - who ever poisoned a religion from the inside... did I mention "contrast" already? Oh, good.
[Sluggo] Healthful vs healthy. Not sure what he means. People are healthy if they're not sick. Food is healthy or healthful. But a healthy snack is a big snack, which is probably not healthful.
[Thomas] Apply Modeus Tollens to that, to see where you get. Indeed, we know what we mean by it.
[Sluggo] Apply what? Is that like Igpay Atinlay?
[Rick] Modus tollens. Contrapositive. http://en.wikipedia.org/wiki/Modus_tollens
[Sluggo] Knuckle sandwich. Also used here. But it's more of a 50s expression.
[Thomas] Yeah, that's like so last year.
-- Thoomas Adam ^^
Is this the all new, super modern, Object Oriented Thomas?
From Sluggo
I looked up kebab & co. in the dictionary (Webster's New World), then 'dict', then meandered to Wikipedia, and along the way found out some stuff about Old Bailey.
Webster's says both kebab and shish kebab mean cubes of marinated meat on a skewer with onions and tomatoes. That's what my mom used to make. When I later encountered them in pitas, I wondered, "Why would you want to muffle the wonderful taste of a shish kebab by imprisoning it in a pita? And why are they calling them kebabs when they're gyros?" Webster says kebab is an Arabic word but claims shish is Armenian.
Wikipedia discusses the variation and also gets into doners (doener, donair).
http://en.wikipedia.org/wiki/Kebab "There are many varieties of kebab and the term means different things in different countries. The term kebab without specifying the kind refers to döner kebab in Europe and to shish kebab in the United States." OK, that squares with my experience.
"Take-out gyros is quite popular in the United States where it is usually beef and lamb, shawarma is available in ethnic neighborhoods but döner kebab is unknown." Yes. Thomas (I think) mentioned a class difference between gyros and doners, that gyros were acceptable but doners were the province of yobs drinking after 11pm. Interesting that we don't have doners, but we don't have gyros after 11pm either. We do have yobs but they're called frat boys, and their favorite food after 11pm is... nothing, just alcohol. Well, maybe late-night pizza. If you did open a late-night doner stand, your first hurdle would be people asking, "What's that?"
http://en.wikipedia.org/wiki/Gyros Mentions gyros' relationship to souvlaki, which are essentially shish kebabs (sans pita).
http://en.wikipedia.org/wiki/Shawarma "In Russia Shawarma (Шаурма or Шаверма) (shaurma or shaverma) became one of the most popular street foods. Originally from the former Soviet Republics of Armenia and Azerbaijan, shawarma in Russia is generally eaten with a variety of julienned vegetables, tomato sauce, and garlic sauce that is wrapped in lavash." OK, that's what I encountered in St Petersburg, called shaverma but looking to all the world like a gyro. (And I quickly started eating them every day for lunch, and pel'meni for dinner.) http://en.wikipedia.org/wiki/Pelmeni
The shavermas I got at a stand, but the pel'meni I bought frozen and boiled at home, much to the surprise of an old WWII widow who lived in the same communal flat. The gas stove scared me since I'd grown up with electric, especially since you had to light it with a match. Said babushka let me use her jar of matches. One stove in the other kitchen always had a burner on low. I asked why. She said, so the guy doesn't have to buy matches. (Gas was free, although electricity was, er, metered.) I was incredulous. "He can't afford to buy a box of matches?" She said, "He's a drunk."
http://en.wikipedia.org/wiki/Souvlaki
Regarding bailey and Old Bailey (a courthouse in London, which I first encountered in A Tale of Two Cities), I had assumed they were related to bail/bailiff, and were called that for some unfathomable reason. But Webster's says bailey is "the outer wall or court of a medieval castle; term still kept in some proper names, as in Old Bailey".
[Rick] Oddly enough, on a recent unplanned trip to London, I found myself with an entire afternoon in which my only obligation was to travel about 7 km to Liverpool Street Station, so I did it by shank's mare[1] -- and one of several places I stopped to gawk was the Old Bailey.
It's so named because it was built next to Old Bailey Street, which lay just outside the perimeter wall -- the "bailey" -- of the mediaeval walled city. The original building fell to the Great Fire (1666), and it's been rebuilt more or less completely several times since then.
I also dropped into Hyde Park, stopped at St. Paul's Cathedral (how could I skip that?), wandered around the "City" legal district and the Barbican Centre, visited Dr Johnson's house, looked around the Guildhall, and probably visited some other places I'm forgetting at the moment.
(I had to make an unplanned side-trip into London from Glasgow because my passport had vanished, and so I was obliged to visit the US Embassy to get a new one. The whole gory tale is here: http://deirdre.net/posts/2005/08/glasgow-ricks-departure )
[1] Before you ask, it's an 18th C. Scottish tongue-in-cheek coinage. "To shank it" meant "go on foot", borrowing from the English "shank" for one's leg portion from ankle to knee: The joke based on that expression lay in referring to a "shank's nag" or "shank's mare" -- the mare of your shank being your own foot, in the absence of more-luxurious transportation. A modern analogue would be when someone suggests I buy some ludicrously expensive part for my bicycle to save weight, and I rejoin that I'd be "better off concentrating on lightening the bike motor".
And "bail" is the convergence of no less than four different words:
(1) [Old French baillir: to keep in custody, deliver] money deposited with the court to get an arrested person temporarily released.
(2) [Old French baille, bucket] a bucket or scoop for dipping up water and removing it from a boat. What Ben will be doing when he gets back to Florida.
(3) [Middle English beil, from Old Norse beygla, to bend] a hoop-shaped support for a canopy, a hoop-shaped handle for a bucket or kettle, a bar on a typewriter to hold the paper against the platen
(4) [Old French baile, from Latin bajulus, porter; this one is related to bailey] formerly an outer fortification made of stakes, a bar or pole to keep animals separate in a barn, a cricket wicket.
From Jimmy O'Regan
For anyone who enjoyed Ben's gibberish Perl script back in issue 89 (http://linuxgazette.net/issue89/lg_mail.html#mailbag.2), there's a Perl module! http://search.cpan.org/~gryphon/Lingua-ManagementSpeak-0.01/lib/Lingua/ManagementSpeak.pm
I especially like the comment in the POD: "By the way, if you're reading this far into the POD, you've earned the privilege to gripe about any bugs you find."
From Sluggo
And now a break from hurricane news....
http://seattlepi.nwsource.com/local/242094_belltownshooting24.html
"A gunman on a motorcycle fired several shots into the air after he was kicked out of a Belltown bar early Friday, then shot into a crowd of people before he was brought down by police gunfire."
From Sluggo
[Jimmy] Referring to Mike's forthcoming article. Check next month's issue!
It's a bit rambling. Does it go into too many topics? Would it be better split into multiple articles?
[Ben] Nope - works fine as a retrospective (which, given your long gray beard, shambling gait, absence of teeth, and the vacant look of someone who lives almost totally in the past, is highly apropos).
(http://linuxgazette.net/gx/2003/authors/orr.jpg, for those who don't know Mike and want to view the 92-year-old physical wreck - the bier isn't quite finished, so we had to prop him against the nearest wall. Tisk, how some people let themselves go...)
I really enjoyed reading this thing, including the linked article by Ted Nelson:
In a way the first luncheon speaker and his enthusiasm perfectly embodied the ideas and controversy of the conference. This was Representative Charles Rose, who wore a strinking plaid jacket.
I hadn't realized that there was a portmanteau word that encompassed "striking" and "stinking" [1]; I'm shocked by the fact that the original Committee That Designed English omitted it, and I'm glad, well and fully satisfied, that this terrible injustice has finally been corrected. I will surely find many uses for it from now on - say, at the end of every sentence, strinking.
[1] Oh, his poor audience. To be visually and olfactorily assaulted at the same time... I'll bet his voice sounded like nails on a chalkboard, too - just to make the experience complete. That's why so many people left the conference, as Ted reported.
From Sluggo
http://www.npr.org/templates/story/story.php?storyId=4863138
Oyster fishermen and hurricanes
From Sluggo
Rita's looking as bad as Katrina, depending on where it lands and how much it weakens by then. When I woke up yesterday morning it was category 3, when I got into work it was 5, and when I left our guys in Baton Rouge were on the edge of evacuating. This morning they were packing.
The media is treating these as two flukes, but there are some dozen storms forming in the Atlantic, so there's a reasonable chance at least a few of them will be nasty. I would just go to the northeast or midwest for a couple of months to avoid the in-again-out-again.
[Ben] [sigh] I wish. If NOAA had upgraded their estimate sooner, or if everything on the boat had worked, or even - now that I've got the welder, etc. going - if I didn't have to go to L.A., we'd be on our way north right now. As it is, well, all I can do is hunker down and hope.
Also note that it hasn't been much of anything in the Atlantic, at least not in the classical sense - the typical pattern being a low moving off the coast of Africa, picking up heat and water off the edge of the ITCZ, getting spin off the Coriolis effect, and making its first appearance on Broadway somewhere around, oh, 17N65W. Instead, it's been "depression forms in the Bahamas OH MY GAWD IT'S A HURRICANE!", with NOAA having damn near zip lead time on these things - of course. Totally not their fault, but not having that two-to-three week window to watch these things mature is unsettling, to say the least.
There seem to be three factors converging:
1) Climate change (we're not supposed to say g%*@!% w*%!!&) is increasing the temperature fluctuations, exacerbating the storms.
2) We're coming out of a 30-year calm cycle for Atlantic storms. So what's "extreme" now is people's short memories. I think we've assumed that the calm period during most of the 20th century was "normal".
[Ben] [Nod] I'm definitely aware of the historical perspective, having studied the weather fairly intensively. Yeah, it's been a very quiet time, comparatively.
I feel the worst for the Caribbean islands. They get most of the storms passing through every year. And they don't have a dry place to flee to.
[Ben] "De hurricane, she do blow." There's two very effective ways of dealing with a 'cane there - you either stay close to the water and build 6'-thick walls out of limestone (coral), like the Dutch did at the St. Thomas waterfront, or you go high and build large houses with huge shuttered windows and a strong below-ground basement (with water storage), as they did at Mountaintop. In the second case, when the 'cane is coming, you take all your stuff into the basement and _open the shutters_ so your house offers nearly no resistance to the wind. After the 'cane, you sweep out the leaves and bring your furniture back up. Needless to say, bamboo and other light construction for said furniture is strongly preferred...
Standard US construction - balloon frames, sheet rock, etc. - is beyond stupid. And yet, most people in the islands do it - because those are the construction methods and materials that are available.