Editor: Michael Orr
Technical Editor: Heather Stern
Senior Contributing Editor: Jim Dennis
Contributing Editors: Ben Okopnik, Dan Wilder, Don Marti
Send tech-support questions, Tips, answers and article ideas to The Answer Gang <tag@lists.linuxgazette.net>. Other mail (including questions or comments about the Gazette itself) should go to <gazette@linuxgazette.net>. All material sent to either of these addresses will be considered for publication in the next issue. Please send answers to the original querent too, so that s/he can get the answer without waiting for the next issue.
Unanswered questions might appear here. Questions with answers--or answers only--appear in The Answer Gang, 2-Cent Tips, or here, depending on their content. There is no guarantee that questions will ever be answered, especially if not related to Linux.
Before asking a question, please check the Linux Gazette FAQ (for questions about the Gazette) or The Answer Gang Knowledge Base (for questions about Linux) to see if it has been answered there.
Is there any way that I can sync or save my NetWare user password into the Samba password file, so that it will allow authorised users to map drives for future use?
Hi there
During development of a word processor in the Oriya language, I faced the following problem.
The character coding of Oriya lies between 128 and 255. Also, the keyboard mapping I need is different from the default keyboard mapping, which is US English. For typing and displaying those Oriya characters I need to change the keyboard mapping somehow. Could you please suggest any method available in GTK+/GNOME to change the default keyboard mapping (only inside the application)? I tried this using the XChangeKeyboardMapping function, but it changed the keyboard mapping for the entire session, across all applications. Is there any alternative to it? Anticipating a response from you.
Regards
Girija
I just hooked this printer up yesterday. Overall, it prints fine, with one exception: at the end of a page, both lights start flashing. I believe this means some sort of paper error, like a jam or something. After each page I have to reset the printer. BUT, this is only 100% reproducible when trying to print 2 or more pages. If printing a single page, sometimes it errors and sometimes it doesn't. This same printer worked fine connected to a Mac. The difference, beyond the obvious, is that the Mac was connected via USB and the Linux machine is running it on the parallel port. Any help appreciated, since it's pretty annoying to have to print a single page at a time.
Alan Brady
First of all, thanks to all the people who write for and maintain Linux Gazette!
Now to my question:
I want to configure the key 'F2' for Kmail so that when I'm composing e-mail and I press the 'F2' key, the phrase 'Kilroy was here' is inserted at the current cursor location. How do I do this?
I'm pretty sure it has something do with Xresources, but I don't know how to set it up.
TIA for any help you can give me on this.
-- Rod
P.s. I'm running Mandrake 8.2, with KDE 2.2.2 if that is at all relevant to the answer to the question.
Hi
I am trying to get my PPPoE client to work. I am on the Debian distribution, version 2.2.18pre21, using the Roaring Penguin client. There is a continual failure when I try to log in. The ppp0 interface comes up, but I cannot tell if the system is logged into a PPP server. I typed the command to turn on debugging in pppd, but the system writes some garbage and nothing seems to happen. When the system tries to fire up, it tries a PPP connection down a serial line; where in the config file does one map the PPP connection to the eth0 interface? How is it possible to tell if the system is logged into a PPP server? When I run the pppconfig script I can't work out the 4 text parameters the script is after; I only know the user name and password. The pppd program inherently deals with a serial modem; how do I configure it to use my Ethernet card?
My provider is Bigpond in Australia and they use pppoe for authentication.
My user name and password are both in the pap and chap secrets files; is there any need to repeat these in the ppp options file?
How can I manually debug a ppp session? Can I enter all the ppp config parameters by hand?
A snip of my syslog is pasted below. Can you help? I'm a real newbie!
See attached syslog.txt
I've searched Google groups and various mailing lists and I've found several people with the same problem as me, but no solutions to this. I'm running XF86 4.1.0.1 on several Debian Woody xinerama 2 monitor boxes (with several different combinations of video cards) and I can't find a way to post a background image centered across both screens with a single image. I can get an image to center on the left monitor and the right monitor has the same section of the graphic showing (the left half) on the right side of the screen.
--------+--------
|       |       |
|       |       |
|     12|     12|
|       |       |
--------+--------
This is the behavior with xv, xloadimage, feh, gnome control center, gqview and other image viewers. The odd thing is that for applications that use transparency (gnome-terminal, xchat-gnome), the transparent image is correct, so the transparent right screen has the correct transparent image, but not the correct background image. I can send a screenshot showing this phenomenon if you like. Another odd behavior is that small tiled images tile across the middle correctly (both as background and transparently). My question is: how do I make an image center across both screens correctly, like below?
--------+--------
|       |       |
|       |       |
|     12|34     |
|       |       |
--------+--------
Thanks, Matthew H. Ray
Hi Matthew!
I once had enlightenment set up as xinerama and managed to get what
you want: the image across both screens, and it was even with
different resolutions on the screens: 1024x768 and 1280x1024.
I managed to get it (IIRC) in the enlightenment background settings
menu, by wildly fiddling with the sliders that are up/down and on the
sides of the image in the upper part of the control window.
But that was enlightenment, dunno how to do it in the other wm's ...
Robos
Hello, has anyone a solution for how to use the pivot functionality of TFT screens under Linux?
Greetings
Chris de Boer
Greetings, Chris; what's a "pivot functionality?" If you describe it, we might know it. [Ben]
You're not an old-time-enough-Mac guy, I suspect, to recognize the term as generic, Ben: many current generation LCD panels, notably including the Viewsonic's, will pivot on their center axis, becoming vertical.
Even hearing the signal from the panel, much less figuring out how to remap everything to a new screen size, is likely a non trivial problem...
A couple of quick Google searches didn't turn up anything suggestive...
Cheers, Jay R. Ashworth
Attn: Mike Orr
Hello, Mike - We exchanged a few e-mails last year re/ Linux news. I'm wondering if you can point me in the right direction. How would I go about determining which US-based Linux user groups are the largest, or the most influential? Registries I'm finding online don't give me an idea of size. Are there, say, 5 or 10 groups that are known within the Linux community as being the "biggies."
Thanks for your insight,
Katherine Gill
[Don Marti]
SVLUG: http://www.svlug.org
NYLUG: http://www.nylug.org
ALE: http://www.ale.org
NTLUG: http://www.ntlug.org
[Mike "Iron" Orr]
[Note to The Answer Gang: I'm forwarding this even though we don't usually answer marketing questions (the querent sends in press releases to News Bytes) because it asks a question I haven't seen covered elsewhere, a question that will be of interest to many readers.]
Fair 'nuff -- Heather
[Mike "Iron" Orr]
Hi, Katherine. I remember your name although I don't remember what we talked about. I don't know of any statistics on user group size. BALUG (http://www.balug.org) in San Francisco and SVLUG (http://www.svlug.org) in the Silicon Valley each used to get four hundred people per meeting as of a few years ago, but I don't know about now. Those two are pretty "influential" in terms of offering services and being activists. (E.g., SVLUG threw the Silicon Valley Tea Party (http://www.svlug.org/events/tea-party-199811.shtml) in honor of the release of Windows 98 [wasn't that nice of them?], and crashed Microsoft's big demo, "respectfully" wearing their penguin T-shirts and passing out Linux CDs.) But really, user groups in general don't influence Linux in any way. What they do is make Linux more accessible to their members.
Not sure where you're hoping to go with the statistics, but I question the value of having them; without setting values on "influence" I wonder who will care about the factoid, and your research efforts might have been spent elsewhere. Nonetheless I'll give it a poke.
As an SVLUG member I can add some comments, mostly general. At some time in the past we had an ongoing list-borne argument about who was "the largest LUG in the world". Members of two LUGs in entirely different parts of the world started to claim this, approximately simultaneously. Some of the grist included the more detailed question: what kind of members did you want to count? Those who attend almost every meeting and regret when they can't make it? The sum of those who attended any time last year (knowing that "the regulars" are of course duplicates)? Average meeting attendance? Oh, but we have these regular installfests too, and nobody counts there 'cuz we're busy. Oh, but anybody on the general mailing list is really a member -- and boy, do we have a lot of lurkers. Then how did you want to count influence? And influencing whom?
As some started to get bitter about it, 'twas noted that a fight on some stupid label certainly wouldn't help the community at large, and both really changed over to "one of the largest". I forget who the other was; they're not in my region and I'm a busy soul, so I don't even recall if they were also in the U.S. Why? Because it wasn't as important as us all getting on with our Linux-y lives. See my past editorial about "the coin of the realm."
In the world of Linux, "influence" is not based on size, but on the aggregate effort of individuals. An occasional individual is "big" in the sense of having an extra degree of talent -- and eventually heaps of extra respect, built up slowly over time -- a factor my SF-convention running friends at Baycon (www.baycon.com) call "people points". Just being a plugger and helping as one can, can stack them up eventually too, though.
Do you mean "influential" as in political efforts? Heh. Better to ask the Electronic Frontier Foundation (www.eff.org) instead. But they won't know so much about the OS preferred by any individual member, as about the bills that are out there planning to prey on every nerdly soul in the country (and many who aren't, as it starts taking a toll on the ability to use the internet). Oh yes, SVLUG members have been involved in a few rallies here and there. And I'd love to see a notable bloc of senators throw all their weight against the SSSCA because "statistics show" that the amassed geeks of the Silicon Valley are dead set against it. (One of these statistics being California among a limited batch of states that think Microsoft's "settlement" isn't worth a Bic pen.) And the DMCA, otherwise known as the "only big label companies whose policy about their copyrights is You Sure Better Not are allowed to protect theirs; you multitudes whose policy is My Grandma's Recipes Can Belong To Every Mom can go rot." And so on. There are hundreds of poisonous little bills a year, and the politicos simply don't even visit the world we actually live in.
Well what the heck. Maybe a "top ten" statistic would actually help. Good luck, and wish us some while you're at it. -- Heather
Thanks, kindly!!
As seen at http://linuxgazette.net/issue69/henderson.html
Thanks for writing this excellent article, but I wonder if you can give me any pointers on how to make X-Window log in and autostart. I use a Debianized laptop, and having to log in every time I start up is quite unnecessary. I know Mandrake has this option, but I can't find info on how it's set up.
Hoping that if this is not the right place to ask, you could give me feedback as well.
Thanks again
Stian Vading
[K.-H.]
The article describes how to log in automatically at a text login. You can easily place "startx" in your ~/.profile and so automatically launch X and your standard window manager.
To use that qlogin approach, you will probably have to switch your Debian system from graphical login to text login.
Another possibility: it is possible to run more than one X server at once. You could let the system start the normal login screen, but at the same time run qlogin to log in automatically and start its own X server on a different virtual console (like vt8). If this happens later than the gdm (or whatever Debian is using for graphical login), it will switch there automatically.
[John Karns]
Right you are - I forgot to consider the consequences of a ?dm boot configuration. The 'startx' approach indeed assumes a text-based console boot configuration.
Can I distribute Linux Gazette (all issues as are available) on a CD-ROM that I was going to design with open source software?
Vijaya Kittu M
Yes. -- Mike
Hello,
I am not sure if you understand really the meaning of words:
etiquette and vulgar.
The Linux Gazette should conform to the first meaning and so exclude everything from the second meaning. Please refer to etiquette book from the nearest library.
Your public answer should never go to people like this one:
i just came across your website and was looking up bad clusters also.i've seen some of your replies to theses people and you seem pretty cocky. you sound like a total dick, like you dont have the time to just be nice and say geesh im sorry but you have to look elsewhere.
even if you want to personally "punish" him, even if he would be right or wrong.
It would be good behaviour if you simply correct those public pages and ban vulgar words.
Sincerely,
Marko
We censor words like f*ck and c*nt because LG is an all-ages publication. We do not use words like damn ourselves because several readers complained about it several years ago, but we don't think it's necessary to censor it from the occasional readers' mail. Obviously, people can differ over which words belong in the first category and which in the second.
In any case, that issue was published over a year ago and this is the only complaint we've received.
LG has never claimed to be the Emily Post of Linux. Our goal is to provide technical information and to make Linux more fun. Letters are published or not published according to their overall message, not whether they contain certain words. -- Mike
[Thomas Adam, the LG Weekend Mechanic]
I would just like to re-iterate the comments that Mike Orr made in this e-mail by saying that the querent (that's the person that sent that "abuse" e-mail to us) never actually sent an e-mail to us, asking a question that pertained to Linux.
Indeed, many querents that e-mail us don't actually bother to check who they are really asking their question of.
Thus, we get a lot of Windows questions that have no relation to the subject matter contained within the Linux Gazette.
I do not consider the replies to people's e-mails rude in the least. Yes, harmless banter (oh... hi, Ben!) does take place, but it is really only because the querent has asked a really stupid question, or because of the reasons already discussed.
For example, I could be really picky, and say that the phrase which you used:
"Please refer to etiquette book from the nearest library."
is nonsense. It is grammatically incorrect, since it should read:
"Please refer to ***an**** etiquette book from the nearest library"
but who am I to complain???
Should you have a question relating to Linux, then please send it to the list.
Regards, -- Thomas Adam
It may be noted that we no longer publish all messages that come to us, nor threads with no Linux (or LG related) content even if we do sometimes answer their questions successfully. -- Heather
Dear Sir/Madam
I would like to know from you answers of 2 Questions:
Strictly speaking, these are publishing questions, not Linux questions, but I cheerfully answer questions about LG itself anyway. -- Heather
Is 'Linux Gazette' itself a professional journal?
No. It's a web zine produced by volunteers. -- Mike
Linux Gazette is hosted by SSC.com, the internet site of Specialized Systems Consultants, Inc, a professional publishing company which publishes cheat cards, maybe some books, but definitely the standard print magazines Linux Journal and Embedded Linux Journal.
Although mirrored in approx. 47 countries, carried in nearly every major distribution of Linux on the planet, translated to multiple languages monthly, and the license we use allows it, there is not to my knowledge anybody publishing print editions of the Linux Gazette on a regular basis. If you know of such please let us know and we will be glad to give them a place of honor on the mirrors page: http://www.linuxgazette.net/mirrors.html
The staff and columnists of Linux Gazette are unpaid volunteers. Other than that, we try to provide a high quality 'zine. We have been published monthly since... (she steps aside to check the Table of Contents) ... September 1996 (not all issues before that were monthly) and there have been a few mid-month special issues.
Some of our staff have attended large shows in a professional capacity as press. You'd have to look back through our editorials for the references.
Linux Gazette is a part of the Linux Documentation Project, a worldwide effort to provide usable documentation for many things one might want to do with Linux. -- Heather
Is 'Linux Knowledge Portal' a professional journal?
Hmm, hadn't heard of this one before; Google! reveals: http://www.linux-knowledge-portal.org -- Heather
I hadn't heard of it ... And since we do publish a professional journal (Linux Journal), I asked LJ's Editor, and he hasn't heard of it either.
I did a Google search and discovered that http://www.linux-knowledge-portal.org exists. It used to be the SuSE Linux Knowledge Portal. If you want to know whether it's a professional journal, why don't you ask them? It also depends on what you mean by "professional journal", and why you care.
If you want to send an article, advertisement or press release to Linux Journal, see http://www.linuxjournal.com/contact.php . -- Mike
An interesting-looking news site, a little ugly in lynx but definitely usable. Not hosted by SSC, so our hosts couldn't say anything about its status. I'm not involved with it myself, so what follows is merely my opinion. I'm good at having opinions on things.
It appears to depend heavily on automated retrievals from other sites which produce news in the Linux world, freshmeat and slashdot for instance. It seems professionally maintained to me though this is purely a gut reaction to usability at the site. The "Help" button mentions that it is themeable to your personal tastes if you let the site use cookies. Too bad there's no About section.
The question of whether a newspaper is a real newspaper if they have no investigative reporters and only read AP/Reuters, is a philosophical one beyond the scope of our site. But if you find an answer to that question, I'm sure the same answer applies here.
It is, however, fitting the common definition of "Portal" to a T. -- Heather
I would be grateful for your response.
Regards Touheed
Since I cannot determine your definition of "Journal" and "professional" in this context, I can't tell if either of these answer your question.
If your question is actually, "can I get paid for writing for Linux Gazette" I'm afraid your answer is no. Consider the Linux Journal instead.
If your question is actually, "can I use getting published in Linux Gazette as part of my Curriculum Vitae, resumé or to satisfy a publish-or-perish imperative at my academic institution?" the answer is almost certainly yes. You may want to consider our submission guidelines at: http://www.linuxgazette.net/faq/author.html
Use of a spell checker would be advised. The motto of our 'zine is "Making Linux a little more fun!" and so writing in a style readable by a lot of people is preferred.
As for Linux Knowledge Portal, perhaps you should ask their webmaster.
Hope you found that interesting; not sure if it's useful. -- Heather
You still have time to submit artwork for the contest introduced in last month's Back Page.
Well, I found a solution - but that solution is part of a package that's interesting for more reasons than one. AccessControl, a package of useful tweaks designed to help folks with disabilities, had what I needed and more, along with a control panel that pulled it all together (of course, the individual utilities could still be used as stand-alone programs.) It's available at <http://cmos-eng.rehab.uiuc.edu/accessx/>.
Interestingly enough, Dan Linder (the author) says that a similar panel has been incorporated into X11R6.6 - a Very Good Thing, in my opinion. However, for those of us who'd like (or need) a bit more control over our keyboards, mice, display, etc. and are not willing to chase the bleeding edge, this package can be a useful tool in the sometimes confusing "battle of the interfaces".
After going back to my tried-and-true "icewm" (KDE was just too bloated for my 366MHz/64MB laptop), I gave a bit of thought to "URL clipping", which - if not over-automated - could be a handy feature indeed. Then, I remembered the "xclip" utility.
See attached clipurl.bash.txt
All that was left was tying "clipurl" to a key sequence in "icewm". To do that, I simply added the following line to my "~/.icewm/keys" file:
key "Alt+Ctrl+u" clipurl
Now, when I select a URL and want to launch it, I press "Alt-Ctrl-u", and - presto! A new Netscape window pops up (if Netscape is already running, it spawns a new one). It also works for files in your home directory, or "clips" that contain the entire path as well as the filename.
One of these days, I might write a little "chooser" for "ftp://", etc. URIs... but so far, it hasn't been a problem.
My tip concerns the CUPS configuration utility that is accessed through the web browser at http://localhost:631/
My default browser, galeon, takes a while to start on my machine. If all I want to do is run the CUPS interface to change a printer parameter, then it's much quicker to call it up with the w3m web browser in an xterm. Though text-based, w3m even supports inline images. I put a "printer" button on my gnome panel that launches the following command when pressed:
"xterm -title CUPS -bg black -fg white -geometry 110x46+240+50 -fn 7x14 -e w3m http://localhost:631/printers"
Steve Robertson
Hi,
I'm the editor of the 'Gazeta do Linux', the Portuguese version of Linux Gazette. We received the attached email with a question for you from Alfredo Guimaraes Neto.
Cheers, Pedro Medas
Ola,
Gostaria de saber se voces teem um tutorial de como mudar a imagem de
inicializacao do linux, aquele pinguinzinho com um copo de cerveja, pois
tentei varias vezes e estou com dificuldades, quando mando compilar o
kernel, da sempre erro nesse arquivo.
Grato, Alfredo
Hi,
I would like to know if you have a HOWTO on changing the boot image of Linux, that penguin with a beer cup. I have tried several times and I'm having difficulties: when I try to compile the kernel, it always reports the same error.
Greetings,
Alfredo
Thank you Pedro. I have an answer for him. If you would be kind enough to translate it back I think he'd appreciate it. -- Heather
Hi Heather,
Thanks for the answer to the 'Two Centavos Tip'.
I will translate it for him.
If you need any more info or help feel free to say so.
bests,
Pedro
Not precisely a HOWTO, but actually useful instructions, are at the Linux Kernel Logo Patch Project: http://www.arnor.net/linuxlogo/download.html
Apparently you are not the only one in the world who is inclined to change the boot logo, but finds it hard to figure out where you would tweak the kernel code to use your own. So these people have a patch that makes it easy for everybody, not just kernel-hackers, to put in a new image.
I think they're looking for help on getting the non-intel platform logos right.
For my own part, I like it, I think I'll be using it soon myself!
Hi Mailgang,
Concerning the question of Donal Rogers (rogers from clubi.ie) in the Mailbag of LG76 I found the following in: http://users.pandora.be/sim/euro/112/kde/kbdandbdf.html http://www.interface-ag.com/%7Ejsf/europunx_en.html
So: you may start a new xterminal screen with the Euro-enabled font:
xterm -fn -misc-fixed-medium-r-normal--13-120-75-75-C-70-ISO8859-15 &
In this terminal you can use the Euro-symbol (eg. echo -e "\244"). The question I cannot answer is: how do you force all of your applications to use this font (if indeed that is the best solution). But I hope it gives you something to start working with.
--
groeten,
Rene van Leeuwen
Hi,
Please kindly advise me on PPP.
I'm using RedHat 7.2; somehow I am having difficulties in getting the modem set up and recognized.
I compiled the new kernel with PPP add-on: Network Device Support -> (Y) PPP Support -> (Y) PPP Support for async serial ports
1. My external modem is connected to COM1, so when I echo > /dev/ttyS0, the TR light on the modem comes on.
2. I ran setserial -g /dev/ttyS0; it shows: /dev/ttyS0, UART: 16550A, Port: 0x03f8, IRQ: 4
OK - those numbers look fine, and the above test says that you're definitely on the right port.
I ensured that IRQ 4 is not used by any other device by checking cat /proc/interrupts.
3. When I performed wvdialconf /etc/wvdial.conf, the results showed that no modem was found on ttyS0.
I tested with 2 external modems, and the same problem arose; but of course both my modems (one of them was a MERZ 566) were in working condition.
Where did I go wrong?
As far as I can tell, you didn't; "wvdialconf" does not guarantee to detect all modems. Try using "minicom" to test it: do the serial port setup (it's pretty self-explanatory) and see if the modem will respond to simple commands like "AT" (it should come back with "OK"), "AT&V" (show the profiles), "ATDT5555555" (dial those numbers), etc. If it responds, just use those values in your "/etc/wvdial.conf", and everything will be fine.
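For reference, once the modem answers in minicom, a minimal "/etc/wvdial.conf" might look something like this (the phone number, username and password below are placeholders for your ISP's details):

```
[Dialer Defaults]
Modem = /dev/ttyS0
Baud = 115200
Init1 = ATZ
Phone = 5555555
Username = yourname
Password = yourpassword
```

With that in place, running "wvdial" should initialize the modem, dial, and hand off to pppd.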
xmodmap -e "pointer = 1 3 2 4 5"
If that works for you, you can place the expression (the part between the double quotes) in a ".Xmodmap" file in your home directory - or launch it directly by specifying the entire command line in your "~/.xinitrc" or "~/.xsession" file, depending on how you start your X session.
Hi,
In the 2 cent tips from LG 77, Chris Gianakopoulos writes:
"It is my belief that Net4, although it may be influenced by other
protocol suites, was written from scratch (other than being derived
from NET3.)"
I read recently in Linus Torvalds' "Just for Fun" (and again in Glyn Moody's "Rebel Code") that the TCP/IP implementation in Linux was written from scratch in order to avoid being hassled by AT&T, who owned UNIX at the time. I suppose AT&T was using their legion of lawyers to go after other UNIX implementors for royalties.
Thanks,
Brian Finn
Hi Brian,
That makes sense. I've read somewhere that the book, "The Design of the Unix Operating System" by Maurice Bach, influenced Linus Torvalds with respect to his Linux stuff. The book described the algorithms of System V Release 2. Of course, other stuff influenced him also. Thanks for that info, Brian.
Regards,
Chris G.
Hi there Ben,
I am responding to you as you were first on the list of answer people:-
I refer to "ntfs clobbered my ext3fs!!" in Linux Gazette 77 in which the questioner asks about a partition overlap.
I have encountered this twice. Both times it has been with a mixed Windows/Linux drive and using automated partitioning (ie Disk Druid or DiskDrake). Your questioner has exactly this scenario.
Now, I never use automated partitioning and I partition the drive using parted before I start the installation. I use primary partitions where possible and avoid mixed Windows/Linux disk setup.
I have experienced the overlapping partition syndrome and have found it very difficult to overcome. I have not been able to sort it out using fdisk, as neither the Linux nor the Windows fdisk can do anything with such corrupted partitions. I have only been able to recover using disk manager software, and this was a destructive recovery.
Regards
Frank Brand
Hi there
I would like to know how to set up my email on my home network with win98 outlook express and Linux.
I would like to set it up so that I can email anybody else in the house on the network and email via the internet when needed.
Thank You
Cheryl
There are a couple of LinuxWorld articles describing Nicholas Petreley's setup, which may be suitable for your requirements.
http://www.linuxworld.com/site-stories/2002/0318.ldap1.html
http://www.linuxworld.com/site-stories/2002/0401.ldap2.html
Simple question: what is an ".RPM" and how do I use them? I assume they are a type of compressed file, but what do I need to use them?
RPMs are Red Hat Package Manager files. They contain the necessary files for a package, including setup scripts to be run pre- and post-install. They also carry a list of dependencies, so rpm can determine whether you have installed the other packages on which this one depends.
Simple usage
rpm -Uvh pkg.rpm   # install package from pkg.rpm
rpm -Fvh pkg.rpm   # freshen (update) package from pkg.rpm
In both the above examples, v means verbose and h prints hash marks as a progress indicator.
For examples of other usage, see http://www.getlinuxonline.com/omp/distro/RedHat/rpm.htm
Neil Youngman
P.S. If you're asking questions of this list, please turn off MIME and HTML.
Hi,
Check out www.linuxtoys.com. This site has some great examples of how to read/write from serial ports in Linux.
The
Radio Shack DVM with RS-232 <http://www.linuxtoys.com/dvm/dvm.html>
article was of particular use for me.
Good luck,
G Wozniak
Hi,
check out the Serial Programming Guide for POSIX Compliant Operating Systems at http://www.easysw.com/~mike/serial . You can find the answer in chapter 4.
Best regards,
Matthias
Why when starting SSH client does a subset of sftp open up in the background by default?
Take a look at the last line of your "/etc/ssh/sshd_config":
Subsystem sftp /usr/lib/sftp-server
Also, from "man sshd":
Subsystem
    Configures an external subsystem (e.g., file transfer daemon). Arguments should be a subsystem name and a command to execute upon subsystem request. The command sftp-server(8) implements the "sftp" file transfer subsystem. By default no subsystems are defined. Note that this option applies to protocol version 2 only.
I find the next-to-the-last sentence very interesting... on Solaris, for example, it's defined but commented out. On Debian Linux, it's defined and enabled by default. I suppose you could turn it off by commenting out the line, but I'd make absolutely certain that I didn't have any need for it first.
Hello everybody,
I have emails with a MS-TNEF file and a humor.mp3.scr file as attachments waiting in my inbox. How do I view/listen to these attachments?
You really don't want to open humor.mp3.scr. That's the Badtrans virus! Fortunately, as a Linux user, you're immune.
See http://vil.nai.com/vil/content/v_99069.htm for more info.
Neil Youngman
As a general point, anything which has two whole three letter extensions (.jpg.pdf, .mp3.scr, and so on) especially when the second is one that may be reasonable to auto-view, you should be immediately suspicious that it's probably a virus. The same goes for MIME types which represent auto-view type files but which do not match the extensions given on the attachment (e.g. audio/wav but the attachment says .jpg).
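That rule of thumb is easy to automate in a mail filter. Here's a minimal sketch in Python; the helper name and the exact regular expression are my own invention, not an existing tool, and a real filter would of course want a more nuanced policy than "two three-character extensions":

```python
import re

# Hypothetical helper applying the rule of thumb above: a filename
# ending in two three-character extensions (like .mp3.scr) is suspect.
def looks_suspicious(filename):
    return bool(re.search(r'\.[A-Za-z0-9]{3}\.[A-Za-z0-9]{3}$', filename))

for name in ("humor.mp3.scr", "photo.jpg", "report.jpg.pdf"):
    print(name, "SUSPECT" if looks_suspicious(name) else "ok")
```

Checking the declared MIME type against the extension, as suggested above, would be a natural second test to add.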
However, there are 4 or 5 different small utilities that will deal with a true "TNEF" attachment, easily found at freshmeat.net -- Heather
On Fri, Apr 12, 2002 at 06:02:39AM +0100, Alok Garg wrote:
Hello Sir,
I have 2 HDDs of 20 GB each; on the primary drive I have WinNT and on the secondary I have Linux RH 6.2. I wanted to uninstall Linux from the system without affecting my data on WinNT. I wanted to move my secondary drive to another machine.
I'm sorry, but that's impossible. Removing Linux from your machine would utterly destroy (beyond any hope of recovery) the data on every WinNT machine in a 60-mile radius of where you are. Note that everybody will know exactly who is responsible: you'll be left in the center of a large charred circle. Even if you removed the HD with Linux and carried it off, as soon as you erased it, your NT would know.
It all happens magically, really.
(HINT: There's no magic. NT may be evil, but it does not watch your Linux drive and explode if anything changes.)
See <http://www.linuxgazette.net/tag/kb.html#uninstall> for tips on uninstalling Linux.
Make sure sshd is "always" there for you.
Using OpenSSH (circa 2.95 or later?) you can configure the sshd to run directly from your /etc/inittab under a "respawn" directive by adding the -D (don't detach) option like so:
# excerpt from /etc/inittab, near end
ss:12345:respawn:/usr/sbin/sshd -D
This will ensure that an ssh daemon process is always kept running, even if the system experiences extreme conditions (such as an OOM --- out of memory --- situation brought on by overcommitted memory) or a careless sysadmin's killall kills the running daemon. So long as init can function, it will keep an sshd running (just as it does with your existing getty processes).
This is particularly handy for systems that are co-located and which don't have (reliable) serial port console connections. It just might save that drive across town, or that frustrating, time-consuming and embarrassing call to the colo staff, etc.
If Python's built-in recursion limit keeps your incredibly cool recursive function from working, you can temporarily set a different recursion limit with the sys module.
import sys

oldlimit = sys.getrecursionlimit()
sys.setrecursionlimit(len(big_hairy_list))
incredibly_cool_recursive_function(big_hairy_list)
sys.setrecursionlimit(oldlimit)
If you have an account on a system where only your ssh1 key is installed in your authorized_keys file, you can force your ssh connection to use version 1 of the protocol with ssh -1 example.com.
Then you can use scp with the -1 option to transfer your ssh2 key there, so that you can use version 2 to connect from now on. Paranoid sysadmins are turning off version 1 access, so you should be using version 2 everywhere by now to be on the safe side.
To make executables smaller, try running strip(1) with the options -R .comment -R .note. This removes the ".comment" and ".note" sections that the compiler and linker may have added during the build process.
(source: MontaVista Software's MontaVista Zone customer support site.)
If you're running your headphones straight out of your sound card's "Line out" jack, you might notice there's no volume control. Instead of trashing your ears or firing up an audio mixer every time you need to set the volume, just bind the commands
aumix -v+4 # crank up the volume!
and
aumix -v-4 # turn that crap down!
to two spare function keys. (In Sawfish, this is under the "Bindings" menu in the sawfish-ui program.) Presto--free and easy volume control straight from the keyboard.
There are also nifty little volume control applets for the KDE and GNOME taskbars, but why spend pixels on a common task when you have all those keys just sitting there?
Hi Mom!
(I couldn't resist)
Hello everyone and once again welcome to the world of The Answer Gang. We had around 500 messages come through, the peeve of the month seems to be a few people overdosing on their sense of humor, and in case anyone was curious... my printer works fine these days.
I'm sure some people are going through Spring Cleaning. In my case I'm cleaning up my hard drive. I got a much, much bigger one and used my new distro installation as an excuse to perform the reorganization at the same time. This effectively turned an afternoon's task into a couple of days of juggling bits and an occasional adventure throughout the month to correct one or another facet of the installation.
At this point all my virtual hosts work, and I've finally gotten over how much easier elm is than mutt because I'm successfully using hooks to make the silly thing much brighter about what folders to save things to. For my style of folder reading this is perfect! Now all I have to do is whap those "elm2mutt" people for writing a converter that doesn't work if elm is already gone and you only have the aliases left. Sigh.
In fact I'm planning to leap feet first into the new development cycle over at LNX-BBC.org. Nick has this cool new build system and when we're done the thing really will be able to make world on itself, I think.
I'm pleased to see that kernels are settling down to some pretty usable stuff. Soon I'll be able to trust it on ultrasparc and maybe update our production server. Meanwhile, a nice solid 2.2.x kernel for us, yes indeed.
That's one of the things I like best about Linux, actually. Nobody holds a gun to your head and says that you have to use the latest and bleed all over that bleeding edge. If your sound or your pcmcia card just doesn't work right under the new stuff - great, stick with what works. Userland is a separate thing; you can upgrade it by some fairly small parts most of the time. Of course glibc is a tangled mess, but then, it pretty much always was...
Later this month (Memorial Day weekend, for those of you who follow US holidays) I'll be running the Internet Lounge at Baycon, a science fiction convention. It'll be a nice tribute to how well older systems hold up with Linux under the hood. If you happen to be in Northern California around then, feel welcome to drop on by.
See y'all next month, folks!
Answer By James T. Dennis
Do you have a stack of Linux machines in a server room or at a co-location site? Do they all have serial consoles hooked up to a reliable terminal server? Or, is it that you can't afford to buy one of those cool Cyclades or other terminal servers, or your boss won't let you take up valuable rack space for one?
Depending on your answers to these questions you may qualify to use the unrevolutionary, completely unpatented "serial buddy system". Just take (or make) a few inexpensive null modem cables (n+1 for n machines) and link the systems in a chain (COM1 on System X to COM2 on System X+1, and around to System 0 to form a loop).
Install minicom or ckermit/gkermit, and mgetty, agetty, or uugetty (any getty that's capable of null modem -- direct serial -- operation), add the appropriate lines to your /etc/inittab, add options to /etc/lilo.conf or your grub configuration files (to pass console= directives to the kernel(s)), and (also optionally) compile your kernel with serial console support.
(The gory details are left for more detailed treatises such as http://www.tldp.org/HOWTO/Text-Terminal-HOWTO-17.html#term_as_console and .../linux/Documentation/serial-console.txt --- wherever your kernel sources are stored).
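As a sketch, the two pieces of configuration might look like this (device names, baud rate, and the choice of agetty are assumptions; adjust for your own layout):

```
# /etc/inittab: answer logins arriving on the second serial port
S1:2345:respawn:/sbin/agetty -L 9600 ttyS1 vt100

# /etc/lilo.conf: send kernel console output out the first serial port
append="console=ttyS0,9600n8 console=tty0"
```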
The end result of all this is that, when you need to look at the console of any machine, you can use a terminal package (such as minicom, or ckermit/gkermit) on the machine "next to" your target. This is much less flexible and convenient, and a bit less robust, than using a good terminal server --- but it's better than driving across town to the colo facility just because your reboot failed, or you have to pass some new option to your (possibly new) kernel, or whatever. It's predicated on the likelihood that you won't manage to munge all of your machines at once.
Cheap:
you can get null modem cables for less than $5 (U.S.). (Better: you can make your own RJ45 to DB-9 null modem adapter pairs and use normal Ethernet patch cords, in a wide selection of colors, to connect them! That keeps the rat's nest behind your machines a tad more manageable.)
Available:
you probably already have a couple of spare serial ports on that server, anyway (and some of the new kernels even support USB serial console drivers!)
More Available/Robust:
some PC motherboards support serial console right into their CMOS set-up --- so you can change the boot device, etc.
Fairly Robust:
No single point of failure? It's possible (with more advanced fussing) to force the getty's to be quiet. That should allow each of the null modems to be bi-directional (a login could be initiated from either end by connecting to the line and hitting Enter or sending a BREAK). (The trick is to force the getty's to wait for a line signal before issuing a login: prompt --- some of them have this option). Obviously systems with four serial ports can be cross-wired for additional redundancy --- though only one port on any system can be the "console" --- serial getty's can be run on the others.
Did I mention CHEAP! This is way cheaper than buying a Cyclades and paying the rackspace rent on it, too; and much cheaper than a PC Weasel 2000 (and spending a PCI slot on that!) and even cheaper than a set of KVM cables (not to speak of the KVM switch and rackspace consumption you'd devote to THAT).
BTW: you can also add a modem or two into the mix --- putting them on systems with extra serial ports (COM3, or even COM2 on some system where you've got the "bi-directional, quiet getty hack" working). This can get you in to do troubleshooting even if your network connection to the colo goes down. That's especially handy if you happen to have another null modem into your router's console! (As in: "I updated the packet filters on the Cisco and now we're locked out! Ooops!").
[And, if it's saved your butt a few times but proves to be unbearable for other reasons (see below), it's easy to plug in that terminal server when you get your boss to pony up for it.]
Kludgy:
You have to remember which machines are neighbored to one another; you have to mark up your rack diagrams with another cryptic detail.
No centralized control, logging, monitoring etc:
There are a lot of advantages to a modern terminal server (in the case of recent Cyclades products --- they are embedded systems running a Linux kernel from flash and supporting ssh for network-to-serial gateway functions). The "buddy system" is much simpler than all that, but much less "featureful."
Works "well enough":
This approach may deter your boss/manager from letting you get that terminal server and "do it right." C'est la vie!
Answer By James T. Dennis
The Linux kernel supports a class of devices called "watchdog" drivers. These are programmable timers which are wired to a system's reset or power lines. They are common on non-PC servers and workstations and in embedded devices, and are increasingly included in PC PCI chipsets. There are also PC adapter cards that can function as watchdog timers; some of them are included in adapters with other functions (such as the PC Weasel 2000, or some high-precision real-time clocks) and some of them have electronics to monitor CPU or case temperature, power supply voltages, etc.
These all have one function in common: they can be set to some time interval (60 seconds by default, under Linux) and will count down towards zero. If they ever reach zero they'll strobe the reset line and force the hardware to reboot. Thus they require periodic "petting" or they'll "bite" you.
The Linux kernel supports a variety of watchdog hardware, and also includes one which is a software emulation of what a watchdog timer does. (Those are a bit less robust, since some forms of kernel panic or failure might leave the system wedged and unable to execute the softdog code). (The Linux kernel can be set to reset after a time delay in case of panic --- the default is to dump a message and registers to the console and wait for a human to read them and reboot. Read the bootparam(7) man page and search for panic= for details on how to over-ride that).
All of this is of no use unless you also have a daemon or utility that can set the watchdog, monitor the system, and periodically "pet the dog." (Some texts on this topic use the more abusive "kicking" analogy --- but I find that distasteful).
Of course one can write one's own daemon, or even a cron job (if one over-rode the default 60 second value to be a bit longer, to account for possible cron delays). However, it's best to start with one that's already written and reasonably well proven. The Debian project has one that's simply called "watchdog." Although it is a Debian package it can be adapted for use on any Linux distribution.
This particular daemon performs up to 10 internal system tests (most are optional) and it can be configured to execute a custom suite of tests --- your own script or binary, which must return a zero exit value on success (and should run in under some liberal time limit). In other words, it's extensible. On failure it can attempt to execute a custom "repair" script or binary, then it can try a soft reboot (with statically compiled code --- NOT by calling the normal 'shutdown' or 'reboot' binaries). Failing all of that, it will simply stop writing to /dev/watchdog, which leaves the dog unpetted --- so the hardware strobes the reset line, or the softdog driver reboots the kernel.
In (almost) any event a system failure should result in a reboot instead of a hang. That can be good for systems that are remotely located and hard to reach. Of course Linux is pretty robust and reliable, so it's rare that the watchdog will be needed; and of course the watchdog may cause some spurious reboots, especially when you're initially configuring and tuning it. But there are cases where it's worth the risk and effort.
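The core of any such daemon is tiny. Here is a harmless sketch of the "petting" loop; it writes to a scratch file rather than /dev/watchdog, and the count and interval are illustrative (a real daemon would loop forever and sleep well under the timeout):

```shell
# Sketch of a watchdog daemon's main loop. On real hardware the writes
# would go to /dev/watchdog; here they go to a temp file so the sketch
# is safe to run anywhere.
dev=$(mktemp)                  # stand-in for /dev/watchdog
pets=0
while [ "$pets" -lt 3 ]; do    # a real daemon loops until shutdown
    printf '.' >> "$dev"       # any write "pets the dog" (resets the timer)
    pets=$((pets + 1))
done
echo "petted the dog $pets times"
```

On real hardware, closing the device cleanly (after writing 'V' first, on drivers that support the "magic close" feature) is what disarms the timer at shutdown; simply exiting leaves the dog to bite, which is exactly the behavior the Debian daemon exploits as its last resort.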
From Morgan Howe
Answered By Dan Wilder, Michael Gargiullo, Thomas Adam (the LG Weekend Mechanic), Ben Okopnik
LJ,
I'm almost a junior in college now, and I know I want a career in the computer field, but my real love is Linux. I'm also really interested in networking and the internet, but there are just so many options it's hard to make up my mind. I'm wondering if there is a good-paying career for a Linux professional, and if so, what should I do in my last two years of college to prepare myself? I can't decide if I should go with an information systems degree, or just a regular CS degree. If I could just get any information about possible career ideas in the Linux field, or even if you could point me in the right direction to find more information, I'd greatly appreciate it, and you have my word I'll renew my subscription when it runs out.
Thanks in advance, Morgan Howe
Near as I can tell, the Linux Journal staff decided to send it to us and see if we could answer him better. I hope he, and anyone else out there job seeking these days, finds this useful. -- Heather
[Dan Wilder]
Most everybody at SSC works full-time in Linux. IBM, HP and other major players are putting lots of money into Linux, and it seems to be holding its own as a web server platform while continuing to creep into the enterprise.
You might try keying "Linux" into a search of dice.com. Lots of spots for network administrators, web designers, driver writers, and others, last time I checked.
Your mileage may vary. A large Redmond company might prefer if there were no such thing as Linux, and though many of us have our opinions, in truth only time will tell.
[Michael Gargiullo]
There are more and more Linux-based jobs out there. OK, granted, the market isn't great right now, but more and more companies are realizing the benefits of Open Source.
Your school path should be based on what you want to do... Are you looking to write the next killer app or kernel module? If so, go with the CS degree, and learn good coding form.
As for the company in Redmond... if you like them, they do hire Linux professionals (they don't openly admit this), but a friend of mine who is a Perl genius and a strict Solaris guy just got picked up by them for their "enterprise email server project". Redmond might scream and shout that open source is evil, but they love and use it as well. Just remember, up until a few years ago, all of their web servers were running on *nix boxes. Another example: they have a software version control package that is based off an open source package (they were even lame about it; all of the commands are the same but have an "ms" prefix).
Sorry I ran off on a tangent... There are jobs out there...
Good Luck
Clean Code
-Mike
[Mike "Iron" Orr, LG Editor]
I'm in Seattle. The only places I can think of to search are:
- The job websites - http://www.monster.com, http://www.dice.com, etc.
- Your local hi-tech career fair
- Your local Chamber of Commerce
- Your local library
- Something else I was going to mention, but I forgot.
[Thomas Adam, the LG Weekend Mechanic]
(Well, this is the Linux Gazette (LG), not Linux Journal (LJ), but I'll let you off.)
Linux is becoming more and more popular with businesses these days. Certainly you should have no problem coming into "contact" with it.
...as for your CS degree...
I assume that you're an American. I am English and so cannot really say what your courses are like. I am 19 and am currently at University. I am doing an HND (Higher National Diploma) in Computer Science, which does cover some Unix aspects, if only basic. But it is a good sign that the course leaders here acknowledge the fact that Unix (and indeed Linux) is being used.
Any computer-orientated course should allow you the opportunity of using Linux. There is yet to be a degree here in the UK for Linux. However, software engineering which uses C, does use the Unix environment. So, you might get into Linux that way.
I would recommend going along to a local LUG to find out from the members there how they got involved with Linux.
There is information out there, especially on the internet.
I did a google/linux search and founf 1,2,9998 hits for Linux orientated jobs.
and you have my word I'll renew my subscription when it runs out.
[Thomas] I get the LJ too -- but don't feel obliged to re-new your subscription just because Dan and I have helped you.
It has been a pleasure.
Good luck. Let me know how you get on.
[Thomas] I did a google/linux search and founf 1,2,9998 hits for Linux orientated jobs.
[Ben] Is this that New Math I keep hearing about? Thomas, please send me your professors' email addresses. It's remedial classes for you, sir.
[Thomas] Lol, I thought you'd like it Ben. Of course, don't tell the others it's really that secret KGB code that you've been after. I like the cover up of blaming my maths too -- nobody will ever suspect that our plan for world domination is near completion.
Ok, seriously now though, I made a typo error.
Sorry, Mr. Okopnik, sir, it shan't happen again.....
--Thomas Adam
Answer By Murray Hogg, Dutch
Just a little tip which I've never seen before, but it solves a lot of the problems involved in partitioning drives during a Linux install.
Rather than go to the trouble of partitioning the hard-drive on a functional Windows system (is that an oxymoron?) I simply placed it in a hard-drive caddy. When it came to installing Red Hat 7.2 I replaced the drive in the caddy with a second drive I happened to have from an obsolete system. Now, by simply inserting the appropriate hard-drive in the caddy, I can boot into Win98 or Linux with no more effort than it normally takes to use a Linux boot-disk -- assuming, of course, that your system BIOS can autodetect the hard-drive on boot-up.
Just a few comments on the advantages of doing this:
It can be a cheap way of getting into Linux, as it's actually cheaper to buy a new hard-drive and caddy for install in a new system than it is to go out and buy an old 486 or Pentium I (or whatever) -- it also takes lots less desk space!
It has the advantage that the Linux and Windows installs are totally independent -- a crash on one has no chance of affecting the other whatsoever, and it circumvents the problem that later versions of Windows have to be the only OS on a system.
The one draw-back is the need to add a second (third?) hard-drive to allow swapping of files between two OS's.
Finally, I'm not a developer or hacker, but I imagine using multiple hard-drives would also be a great way to experiment with new Linux distro's or versions (or even software packages) without risking damage to a known and trusted installation.
Hope someone finds it helpful, regards
Murray Hogg
Hi again,
I just received the following warning about the use of hard-drive caddies, which I thought ought to be attached to my dual-boot system idea:
Thanks to "Dutch" for the following insights.
You make a few good points in your post. Now from 10 years as a hardware technician I'm going to inject a few cautionary notes.
1) If you are going to use a caddy system, be sure you get a decent one with solid, well-designed alignment rails and good heavy-duty connection pins. Over time the cheap ones can become mis-aligned and cause bent pins on the internal connectors. Best case, the drive won't be recognized; worst case is a short causing damage to your system.
2) Along the same lines, most removable drive setups do not make solid metal-metal connections to conduct heat from the drive into the case where it is dissipated. So any caddy worth buying should have a cooling fan of some sort built into the tray.
3) Make sure to wait (usually a good slow count to 20) until your drives have COMPLETELY spun down before you remove them. Removing a drive that is still spinning is just asking for damage to the bearings, heads, etc.
4) Treat the removed drives with care (like they were delicate glass). I've seen people yank a caddy out of a machine and just drop it on their desk like a book. How long do you think something as delicate as a hard drive can take that kind of abuse?
5) Be extremely careful of static discharge, especially around the connection pins on the back of the caddy. ESD can kill a drive in a caddy very easily, since the drive is not attached to any sort of protective ground.
Dutch
"I think therefore I am...usually in a lot of trouble."
From Steven
Answered By Ben Okopnik
Hey All,
We are running Red Hat Linux on a Compaq ML570 with four Xeon processors and one gigabyte of RAM. The server has two NICs, one Compaq gigabit card and one 3Com 100Mbps card. After some help from all of you, I have been able to successfully install and configure both cards. However, I have found that after one hour of use, the gigabit card loses all connectivity, while the 3Com card stays up fine. We have tested this scenario several times, and the gigabit card is definitely dropping connectivity after about an hour. The only way to bring it back is to reboot the box, in which case they both work fine, but only for about an hour; then the gigabit loses connectivity again.
I checked out the Compaq website for a new driver, and there was one available. However, when I tried to build it with the 'make install' command from the created directory which contained the Makefile, I received an error message stating that the kernel source was not available. I took a look at the Makefile, and saw it was calling a 'linux' directory in /usr/src/; however, all I have is a 'redhat' directory in /usr/src/. I copied the contents of the 'redhat' directory to a new directory called 'linux' and still had the same problem.
I am running out of ideas, and was hoping someone out there might have run into this problem before, either with multiple NICS or with Compaq RPMS.
Any info would really help!
Thanks, Steven
[Ben] It sounds like precisely what the error says: the kernel source is not available (and kudos to Compaq for making the error that clear; I've seen some absolutely st00pid error messages.) You're compiling a module (Linux doesn't use "drivers", at least not in the Wind*ws sense); modules get pushed onto the kernel, effectively modifying how the OS itself does Stuff. Therefore, you need to have the source code - module compilation depends on it.
Run "uname -r" to find out what version you're running. Download and install that version's source tree on your system; this will go under "/usr/src" as "kernel-source-<version>". Create a symlink called "linux" under "/usr/src" that points to your newly-installed source tree:
ln -s /usr/src/kernel-source-<version> /usr/src/linux
You should be able to run your "make" from here on.
(Obviously, you should delete your current "/usr/src/linux" before any of this - taking wild guesses of that sort can get you in trouble.)
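Putting Ben's steps together as a sketch (the kernel-source-<version> path is an assumption; on Red Hat the tree may be delivered as a kernel-source RPM and land elsewhere under /usr/src):

```shell
# Find the running kernel version and compute the matching source path.
ver=$(uname -r)                          # e.g. 2.4.18-4GB
src="/usr/src/kernel-source-$ver"
echo "link target would be: $src"
# Once that source tree is actually installed, run as root:
#   rm -f /usr/src/linux
#   ln -s "$src" /usr/src/linux
```

The key point is that the symlink must point at source matching the *running* kernel; copying an unrelated tree into /usr/src/linux (as in the question) just reproduces the mismatch.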
Answer (as originally posted on linux-list) by Ted Stern
This content is actually from several messages originally from linux-list, and I have moved around parts for readability. I hope you all don't mind. -- Heather
The question was how to add a path for occasionally-used scripts without having to modify the PATH variable directly. Matlab has a command 'addpath' that does this. He tried to do it with a shell script, but of course that didn't work because it executes in a subprocess, and subprocesses can't modify their parent's environment.
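Why the fix has to live in the current shell can be illustrated with a plain shell function (the function name here just mirrors Matlab's, and the directory is made up): a function runs in the parent shell itself, so it can modify PATH where a child script cannot.

```shell
# Prepend a directory to PATH unless it's already there.
# The name "addpath" and /opt/mytools/bin are illustrative only.
addpath() {
    case ":$PATH:" in
        *":$1:"*) ;;               # already on the path: nothing to do
        *) PATH="$1:$PATH" ;;      # prepend, in the *current* shell
    esac
}
addpath /opt/mytools/bin
echo "$PATH"
```

Environment Modules, discussed below, generalizes this idea: the module command evals shell code generated from each modulefile, so the changes land in the calling shell and can be undone again.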
The more people banging on modules the better. I think it would be great if all package maintainers could set up a modulefile to go with their installations.
Here at Cray, we are in the midst of a giant package installation sequence. Given that there are dozens of open/free/GPL software packages around, and our techies like to use them on all the platforms they work on, it has been nightmarish trying to keep up with every single software distribution. So they set up something called "cfengine" (I think) and each package gets its own automatic modulefile. This makes it easy to get access to tools like LaTeX if you need them.
... later he adds ...
I found the name of the package we are using here to install hundreds of ports for various platforms:
- MPKG
- http://staff.e.kth.se/mpkg
It is already integrated with Environment Modules!
Others have posted various ways to do this, but I'd like to point out that they are all re-inventing the wheel.
A method to modify environment variables cleanly was developed over 10 years ago. It is called Environment Modules. It compiles under Linux. It happens to be the method Cray has used for the last 7 years to modify paths for different versions of its compilers and libraries.
You can even get the latest version via anonymous CVS from sourceforge.
See http://modules.sourceforge.net for more details.
Here's an example of how it works.
In your startup file, (I use tcsh) you put a line like
source /opt/Modules/default/init/tcsh
In a directory filled with "modulefiles", one modulefile named "myghost" might contain some commands like
setenv GS_LIB /local/path/to/my/ghostscript/lib
prepend-path PATH /local/path/to/my/ghostscript/bin
prepend-path MANPATH /local/path/to/my/ghostscript/man
To access your local ghostscript stuff, you could say
module load myghost
and the environment variables are modified as you would expect them to be.
To remove all trace of your changes, you do
module unload myghost
and all is as it was before.
The Environment Modules package has been banged on in a variety of production settings at SUN (where it was initially developed), SGI, IBM, HP, etc., so it is fairly robust.
There is also a mailing list (majordomo), with extremely low traffic, mostly just announcements:
modules-interest@eng.auburn.edu
There are probably other packages to do the same things as Environment Modules, but I doubt that they have as much infiltration into the corporate infrastructure.
Good luck, Ted
gpg fingerprint = 6171 14B3 A323 965B 614D 056F B41C 03AE E404 986C
... Iron also asked Ted ...
[Iron] How do you set your From: address on a per-list basis? Do you do something like "edit headers" in mutt and change it manually for each message? That would be tedious. Or do you have an automated way to do it?
[Ted] Read the full header of an email message, and you will usually see an indication of what the MUA is.
I use Gnus, an extraordinarily powerful email package within Emacs. Of course, I also use the anon CVS version, so I sometimes have a few bugs to deal with. But you can just use the version of Gnus that comes with Emacs if you like.
In my .gnus file, I have a setting as follows:
(setq gnus-posting-styles
      '(("^nnfolder.*:lists.gnus"
         (From "Ted Stern <stern+gnus@cray.com>"))
        ("^nnfolder.*:lists.fortran"
         (From "Ted Stern <stern+fortran@cray.com>"))
        ("^nnfolder.*:lists.linux"
         (From "Ted Stern <stern+linux@cray.com>"))))
Gnus treats mail like news, so I read folders of mail as if they were groups. Within certain of my groups, the setting above adds the extra "From:" header.
Answer By Edgar Howell
Linux ready for the desktop? -- SuSE seems to think so.
On 13 April I installed SuSE Linux 8.0 (2.4.18-4GB) on a notebook. Ignoring one glitch (a pcmcia module, but notebooks are notorious for difficult installs) and my disinclination towards gui-anything, it was the easiest installation of an operating system I have ever experienced -- other than Coherent and DOS.
Not having a PC available with sufficient resources for recent releases of Linux, the now 2-year-old Toshiba Satellite 2180 CDT became the target. In theory all data on it was backed up to the PC but "just in case" /home and a bit more got tar'd and copied to the PC "for a while". So it wasn't an update but a clean install.
Probably I installed at least 4 times. But then 2 is normal: the first time around, surprises don't always get proper responses; the second time is for real. However, there was something about the pcmcia module that hung the install as the system was coming up for the first time. No disk activity, but the fan coming on said the poor AMD was sweating heavily.
Once I believed that -- and by then I had learned that the default office install includes Star Office (which I used to like but would rather replace since it shows its origins too much) -- I chose the standard install without office stuff and before turning it loose removed the pcmcia module from the list of packages to install. After that it was like ho, hum...
The following is my log of the installation, with prompts indented (if the terminology differs from what SuSE actually uses stateside, that's due to my translation from German):
boot CD 1
    menu
Installation
    Language
German
    menu - new/update/start
new installation
    installation settings
accept
    start installation?
yes-install
    root password
xxx,ppp
    add new user
yyy,ppp
    monitor
LCD SVGA 800x600@60HZ
    CRT settings
graphic (settings OK)
    network interfaces and modems
not detected
next
    command line login
root,ppp
shutdown -h now
This took barely 24 minutes, most of which involved installing software. And I have omitted what was done to avoid installing the troublesome pcmcia module (which wouldn't be necessary on a PC).
What really blew me away is that under the monitor options "LCD" was right there and as model one could choose "SVGA 800x600@60HZ"! Yeah, I still checked with sax. The horizontal and vertical frequencies were right. Afterwards I spent several hours playing with the notebook. It even powers off when you shutdown!
Of course it was also neat that the partitions were recognized correctly (yeah, I know, a "clean install", but I've always used Partition Magic) and when all was said and done Win98 was still there, although there would have been no tears shed. Also interesting was what can only be described as a gui-LILO: boot, and you get about 5 or 10 seconds to make a choice on a graphics screen.
I'm not unbiased; I've been with SuSE since their 5.1. This was the first time using yast2, the graphic install, since they no longer have yast1. I wasn't aware of any possibility of driving yast1 with a script, but would have much preferred that, to make it easy to do an identical install on several machines. But then my past includes IBM sysgens with decks of cards. What irritates me about gui-installs is the infinity of questions that need to be answered -- every single time. At least until this SuSE release.
Well, on a PC with adequate resources the yast2 install should go really slick. And like it or not that really is the yardstick nowadays and should go well with the desktop crowd.
Until now I have felt that even frustrated Windows users should stick with what they know unless they are seriously interested in how real operating systems function. In my opinion this release definitely is ready for prime time.
Contents:
Submitters, send your News Bytes items in PLAIN TEXT format. Other formats may be rejected without reading. You have been warned! A one- or two-paragraph summary plus URL gets you a better announcement than an entire press release. Submit items to gazette@linuxgazette.net
The May issue of Linux Journal is on newsstands now. This issue focuses on kernel internals. Click here to view the table of contents, or here to subscribe.
All articles through December 2001 are available for public reading at http://www.linuxjournal.com/magazine.php. Recent articles are available on-line for subscribers only at http://interactive.linuxjournal.com/.
The following articles are in the May/June issue of the E-zine LinuxFocus:
An interview at Linux Journal about the Linux movement and Linux Users Groups in India.
Also at Linux Journal, Linux WiFi Router brings in Subscribers for Ghana's Largest ISP.
Slashdot links:
A couple of links which might be of use when considering new hardware purchases are Linux.org's hardware list and The Linux Hardware Database. Slashdot also recently ran a story on hardware manufacturers that actively support Linux.
Some links from Linux Weekly News
Some links from Slashdot:
Some links from the O'Reilly stable of websites:
Some interesting stories from The Register:
Linux Today have highlighted several interesting links over the past month:
Listings courtesy Linux Journal. See LJ's Events page for the latest goings-on.
Networld + Interop (Key3Media) | May 7-9, 2002 | Las Vegas, NV | http://www.key3media.com/
IBM developerWorks Live! | May 7-10, 2002 | San Francisco, CA | http://www-3.ibm.com/events/ibmdeveloperworkslive/index.html
Strictly e-Business Solutions Expo (Cygnus Expositions) | May 8-9, 2002 | Minneapolis, MN | http://www.strictlyebusiness.net/strictlyebusiness/index.po?
O'Reilly Emerging Technology Conference (O'Reilly) | May 13-16, 2002 | Santa Clara, CA | http://conferences.oreillynet.com/etcon2002/
Embedded Systems Conference (CMP) | June 3-6, 2002 | Chicago, IL | http://www.esconline.com/chicago/
USENIX Annual (USENIX) | June 9-14, 2002 | Monterey, CA | http://www.usenix.org/events/usenix02/
PC Expo (CMP) | June 25-27, 2002 | New York, NY | http://www.techxny.com/
O'Reilly Open Source Convention (O'Reilly) | July 22-26, 2002 | San Diego, CA | http://conferences.oreilly.com/
USENIX Security Symposium (USENIX) | August 5-9, 2002 | San Francisco, CA | http://www.usenix.org/events/sec02/
LinuxWorld Conference & Expo (IDG) | August 12-15, 2002 | San Francisco, CA | http://www.linuxworldexpo.com
LinuxWorld Conference & Expo Australia (IDG) | August 14-16, 2002 | Australia | http://www.idgexpoasia.com/
Communications Design Conference (CMP) | September 23-26, 2002 | San Jose, CA | http://www.commdesignconference.com/
Software Development Conference & Expo, East (CMP) | November 18-22, 2002 | Boston, MA | http://www.sdexpo.com/
Lindows is not only in legal wrangles with Microsoft, but has now run foul of the Free Software Foundation. It would appear that Lindows has been somewhat casual about distributing source code for their products. Bruce Perens has written an open letter to Michael Robertson (Lindows CEO) calling on the company to be honest partners in the free software endeavor. Mono Linux has published a report and analysis of Lindows, available in two parts (one and two).
David Ranch has announced the release of the IP Masquerade HOWTO.
Recent changes include:
Compaq Computer Corporation have announced a three-year, $20 million agreement with RackShack, the hosting services arm of Everyones Internet. Compaq will equip RackShack's IT data centers with industry-standard Compaq ProLiant servers for a tier one, Linux-based Web hosting solution.
Bdale Garbee, an Engineer/Scientist in the Linux Systems Operation group for Hewlett-Packard, has been elected Debian project leader.
Debian Weekly News recently reported that Nathan Hawkins has announced a new base tarball for those who would like to see Debian GNU/FreeBSD live. The status of this port is available here.
Linux Planet have recently reviewed Gentoo Linux, a source based distribution aimed at people comfortable with software development (among others).
Gentoo can also be installed on the PPC platform, and has been reviewed by iMacLinux (link courtesy Linux Today).
Linux and Main have an interview with Bart Decrem, co-founder of Eazel (producers of the Nautilus graphical shell for GNOME) and vice president of Hancom Linux. Decrem discusses software in Korea, why companies and governments outside the US don't want to become too dependent on Microsoft, and more. Also featured on Slashdot. While on the subject of Hancom Linux, Linux and Main also reported that Hancom Linux is shipping what is believed to be the first Arab-language Linux distribution. As reported by OSNews, Hancom have now completely focused on the Linux platform for their Hancom Office productivity suite.
Linux Today have the story that SOT, publisher of Best Linux, has announced a change of name for its Linux distribution to coincide with the release of a new version of the distro. In future it will be known as SOT Linux.
SuSE Linux and IBM have announced a broad services alliance that will enable both companies to jointly provide Linux support and services to corporate customers around the world. In the agreement, IBM Global Services and SuSE will collaborate on support and professional services. IBM will package and support turnkey implementations of SuSE Linux Enterprise Server, backed by SuSE's expert development, maintenance, and support teams. In addition to this complete services offering, the two organizations will also collaborate on customer engagements and supplement each other's skills to provide a formidable Linux services delivery capability for corporate customers.
Slashdot ran the story that SuSE 8.0 has shipped, and now includes KDE 3.0, kernel 2.4.18, and various other upgrades/enhancements.
Mammoth PostgreSQL from Command Prompt, Inc. is an SQL-compatible Object Relational Database Management System (ORDBMS). It is designed to give small to medium size businesses the power, performance, and open-standard support they desire. 100% compatible with the PostgreSQL 7.2.1 release, Mammoth PostgreSQL provides a commercially-supported PostgreSQL distribution for Solaris, MacOS X and Red Hat Linux x86 platforms. Mammoth PostgreSQL ships with built-in support for SSL connectivity (Native and ODBC), as well as programming APIs for C/C++, Perl, and Python. There are one-time and subscription-based licensing models available for immediate purchase.
Command Prompt, Inc., provides support, custom programming, and services for PostgreSQL. Service contracts, as well as time and materials support are available, allowing for single-point accountability for a customer's database solution.
Etnus, a supplier of debuggers for complex code, have announced record-breaking sales of its TotalView debugger on Intel Linux platforms, linking the sales to increased development of complex and mission critical codes on Linux systems. Both sales volume and number of licenses sold for the Etnus TotalView debugger on Intel Linux platforms doubled over first quarter 2001 and, for the first time, Etnus reported that Linux was the top-selling platform. Etnus TotalView is a cross-platform, state of the art debugger supporting C/C++ and Fortran.
Etnus believes Linux will continue to be a leader among the many platforms they support and will continue to expand functionality there. The next release of TotalView will add support for GCC 3.X and the Intel compilers for Linux.
CylantSecure is an intrusion detection system for Linux and other Unix variants that stops attacks before they occur by monitoring the behavior of the operating system. It has been developed and produced by Cylant, a division of Software Systems International. By adding instrumentation to the kernel, Cylant is enabled to benchmark server behavior patterns and detect changes in those patterns during operation. If an abnormal behavior occurs, it can be stopped in real time, preventing attacks before they are executed.
This technique is based on the principle that most attacks change the behavior of the software being exploited in a measurable way. CylantSecure uses sensors to monitor the behavior of the software, along with a statistical analysis engine to identify any abnormalities in the behavior. Through continuous behavioral monitoring, CylantSecure can send administrators early warning of attacks, so appropriate measures can be taken. Such measures might include shutting down the program, shunning traffic from the attacking IP or performing system state analysis.
Get more information on the Cylant website.
Opera Software ASA have released Opera 6.0 for Linux Beta 2 with improved features and looks to increase the speed and enjoyment of Linux users worldwide. The earlier version of Opera for Linux, Opera 5, has reached a milestone of one million successful downloads and installations.
For a complete changelog of Opera 6.0 for Linux Beta 2, please visit http://www.opera.com/linux/changelog/
McObject has released version 2.0 of its eXtremeDB small footprint, main memory database on Linux, with new features to improve developer flexibility and enhance the run-time capabilities of applications based on eXtremeDB. McObject built eXtremeDB from scratch to meet the CPU and RAM constraints of intelligent, connected devices while offering dramatic performance improvements over traditional disk-based database systems. Enhancements in version 2.0 include:
An evaluation version of eXtremeDB 2.0 is available from www.mcobject.com/download for free download.
Mozilla 1.0 release candidate 1 has been released. This is a trial run for the upcoming 1.0 release, and is a good indicator of how close that day is. Indeed, Mozilla even managed to attract the attention of Time Magazine, which reported on the possibility that a Mozilla release could break the browser war armistice.
Arkeia Corporation has released a new Arkeia 5 Beta version. Arkeia Version 5 will be the successor to Version 4.x, high-performance, multiple-platform backup software with 90,000 users worldwide. Arkeia 5 will feature a completely rewritten program architecture and will include an assortment of new features requested by users.
Apache 2.0 is now, officially, stable.
Galeon 1.2.1 has been released
AbiWord 1.0 is out
The new version of Mailman (version 2.0.10) is now available.
[ ** This edition is dedicated to a very dear friend of mine called Natalie Wakelin, who I am indebted to for helping me recently. She has been an absolute star and true friend to me, and although she may not understand a word this "technical" document may have to offer -- I dedicate it to her all the same. Thanks Natalie!! :-) ** ]
Yes, yes, I know. You can stop clapping and applauding. I'm back :-) Seriously, I can only apologise for the "holiday" that the LWM has taken over the past "couple" of months. I have taken rather a large leap into the world of freedom and University life, and I found it more difficult to adjust to than I had originally anticipated!!
But that is by the by.....
For the keen-eyed among you, the quote at the top of this column rather sums up the usability of Linux overall. Indeed, no matter how strange a problem may appear to be within Linux, it is rarely beyond the realm of possibility to solve it using Linux. I have been finding that out for myself quite a lot recently :-)
Aside from all the University work, I have been actively helping out with problems at the Hants LUG, both in person and via their mailing list. Actually, it has been quite exciting. I have also learnt a lot!!
Well that is enough preamble for one month. Enjoy this issue, won't you?
Those of you who read the September edition will remember that I wrote an article about the use of Apache. I had some nice feedback on that (thanks to all who sent their comments). I thought it a nice idea to do a tutorial on squid.
For those of you who don't know, Squid (other than being a sea creature) is a Linux internet proxy program. Why is it called Squid? Apparently because "all the good names were taken".
Squid works by channelling internet requests through a single machine (called a proxy server).
Furthermore, squid offers the ability to filter certain webpages, to either allow or disallow viewing. The ability to do this is through ACLs (Access Control Lists). More on these later.
Installing squid should be straightforward enough. Squid is supplied with all major distributions (RedHat, SuSE, Caldera, Debian, etc.) so it should be easily accessible from your distribution CDs.
For those of you that have a Linux distribution that supports the RPM format, you can check to see if you already have it installed, by using the following command:
rpm -qa | grep -i squid
If it is installed, then you should find that "squid2-2.2.STABLE5-190" (or similar) is returned. If you get no response, then install squid from your distribution CD.
If squid is not on your distribution CD, or you are using a version of Linux (such as Debian and Slackware) that does not support the RPM format, then download the source in .tgz (tar.gz) format from http://www.squid-cache.org/download.
To install Squid from its sources, copy the tarball to "/tmp" and then issue the following commands:
1. If you are not user "root", su, or log in as root.
2. cd /tmp
3. tar xzvf ./name_of_squid.tar.gz -- [or possibly .tgz]
4. cd into the directory that the tarball created, then run: ./configure
5. If that completes with no errors, you can simply type: make && make install to compile and install the files.
Typically, from a standard RPM installation, these directories will be used:
/usr/bin /etc /etc/squid (possibly -- used to be under RH 5.0) /var/squid/log/ [/usr/local/etc] <-- perhaps symlinked to "/etc"
If you're compiling it from source, then a lot of the files will end up in:
/etc /etc/squid (possibly -- used to be under RH 5.0) /usr/local/bin /var [/usr/local/etc] <-- perhaps symlinked to "/etc"
Suffice it to say, it does not really matter; unless you have specifically requested otherwise, this is where the files will end up.
Now that you have squid installed, let us move onto the next section.... configuration
So, you've installed squid, and are wondering, "Is that it?" Ha -- if only it were that simple, gentle reader. Nope....there are lots of things still to do before we can have ourselves a good old proxy server.
Our efforts now shall be concentrated on one file: /etc/squid.conf. It is this file which holds all the settings for squid. Because we will be editing this file, I always find it a good idea to keep a copy of the original. So, I think it would be a good idea if you all issued the command:
cp /etc/squid.conf /etc/squid.conf.orig
Then fire up your favourite editor, and let's begin editing squid.conf.
Trying to run squid with this file untouched, "out of the box", will not work. There are a number of things that you'll have to configure before you can have an up-and-running proxy server. At first glance, this file is about a mile long, but the developers have been helpful: the majority of the file consists of comments about each option that is available.
The first thing is to tell squid the IP address of the machine it is operating on, and which port it should listen on. In squid.conf, you should find a commented line which looks like:
#http_port 3128
Uncomment this line by deleting the leading hash (#) symbol. By default, port number 3128 is chosen. However, should you wish squid to listen on a different port, then change it!! Thus on my proxy machine, I have specified:
http_port 10.1.100.1:8080
This binds squid to listen on the above IP address on port 8080. What you have to be careful of is making sure that no other running application (such as apache) is trying to use the same port -- a very common mistake that a lot of people make.
Now, as we progress through this configuration file, the next major configuration option we should now change is cache_mem. This option tells squid how much memory it should use for things like caching.
I have just uncommented this line -- and left the default at 8 MB
Further on down from this option are some more options which tell squid about the high/low cache "watermarks". These are simply percentages of the cache's disk space: once usage climbs past the high watermark (95%), squid starts deleting cached items until it drops back below the low watermark (90%).
#cache_swap_low 90 #cache_swap_high 95
I uncommented these, but changed their values. The reason is that I have a 60 GB hard drive, where one percent is hundreds of megabytes, so I changed the values to:
cache_swap_low 97 cache_swap_high 98
Right....so far so good. We have told squid which IP and port to listen on, told it how much memory it should use, and told it the percentage of drive space it may fill before it starts deleting its own cached items. Great!! If you haven't done so already, save the file.
The next and penultimate option that I changed was quite an important one, since this one determines the location and size of the cache directories. There is a TAG, which looks like:
cache_dir /var/squid/cache 100 16 256
What this says is that the cache under the path "/var/squid/cache" may grow to 100 MB, spread across 16 top-level directories, each of which holds 256 subdirectories.
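Later on, "squid -z" will build this directory tree for you. Purely as an illustration of the shape being described (the throwaway path is my own, and squid creates the real thing itself), the 16 x 256 layout looks like this:

```shell
#!/bin/sh
# Illustration only: mimic the 16 top-level / 256 second-level layout
# described by "cache_dir /var/squid/cache 100 16 256".
# Built under a throwaway directory -- NOT /var/squid/cache;
# "squid -z" creates the real cache directories.
cache=$(mktemp -d)
i=0
while [ $i -lt 16 ]; do
    top=$(printf '%02X' $i)          # squid names these 00, 01 ... 0F
    j=0
    while [ $j -lt 256 ]; do
        mkdir -p "$cache/$top/$(printf '%02X' $j)"
        j=$((j + 1))
    done
    i=$((i + 1))
done
echo "$(ls "$cache" | wc -l) top-level directories created"
```

Each cached object then ends up as one file somewhere under those 4096 subdirectories, which keeps any single directory from growing huge.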
The last major item that I shall be tweaking in this file, before moving on to filtering, is the use of access logs. Just below the option we have just configured for the cache_dir, are options to allow logging. Typically you have the option of logging the following:
Each of the above logs has its own advantages and disadvantages in the running of your proxy server. Typically, the only logs that I keep are the access log and the cache log, simply because the store and swap logs don't interest me :-).
It is the access log file which logs all the requests that users make (i.e. to which website a particular user is going to). While I was at school, this file was invaluable in determining which user was trying to get to banned sites. I recommend all sysadmins that have or are going to set-up an internet proxy server to enable this feature -- it is very useful.
So, I did the following (uncommenting the TAGS):
cache_access_log /var/squid/logs/access.log cache_log /var/squid/logs/cache.log
I recommend that you leave the log names as they are.
Obviously, I have only covered the most basic options within the squid.conf file. There is a whole mass of options for particular situations. Each option is fairly well commented, so should you wish to see what a particular option does, it should not be too hard to find out.
This section is still using "/etc/squid.conf" but I shall go into the configuration options for access control in a little more detail.
Access control gives the sysadmin a way of controlling which clients can actually connect to the proxy server, be it via an IP address, or port, etc. This can be useful for computers that are in a large network configuration.
Typically, ACLs (Access Control Lists) can have the following properties:
All access controls have the following format to them:
acl acl_config_name type_of_acl_config values_passed_to_acl
Thus in the configuration file, locate the line:
http_access deny all
Above it, add the following lines:
acl weekendmechnetwork src 10.1.100.1/255.255.255.0
http_access allow weekendmechnetwork
You can change the acl name "weekendmechnetwork" to a name of your choice. What this does is define an ACL of type src covering the IP address 10.1.100.1 (the proxy server) with a netmask of 255.255.255.0 -- in other words, the local network. Thus, "weekendmechnetwork" is the name assigned to the clients on the network.
The line "http_access allow weekendmechnetwork" tells squid to grant HTTP access to any client matching that ACL.
The next thing that we shall do is look at allowing selected clients to access the internet. This is useful for networks where not all of the machines should connect to the internet.
Below what we have already added, we can specify something like:
acl valid_clients src 192.168.1.2 192.168.1.3 192.168.1.4 http_access allow valid_clients http_access deny !valid_clients
What this says is that for the ACL name "valid_clients" with the src IP addresses listed, allow HTTP access ("http_access allow valid_clients"), and disallow any other IPs which are not listed ("http_access deny !valid_clients").
If you wanted to allow every machine Internet access, then you can specify:
http_access allow all
But we can extend the ACLs further, by telling squid that certain ACLs are only active at certain times, for example:
1. acl clientA src 192.168.1.1
2. acl clientB src 192.168.1.2
3. acl clientC src 192.168.1.3
4. acl morning time 08:00-12:00
5. acl lunch time 12:30-13:30
6. acl evening time 15:00-21:00
7. http_access allow clientA morning
8. http_access allow clientB evening
9. http_access allow clientA lunch
10. http_access allow clientC evening
11. http_access deny all

[ ** N.B. Omit the line numbers when entering the above; I've added them here to make the explanation easier -- Thomas Adam ** ]
Lines 1-3 set-up the ACL names which identify the machines.
Lines 4-6 set-up ACL names for the specified time limits (24-hour format).
Line 7 says to allow clientA (and only clientA) access during "morning" hours.
Line 8 says to allow clientB (and only clientB) access during "evening" hours.
Line 9 says to allow clientA (and only clientA) access during "lunch" hours.
Line 10 says to allow clientC (and only clientC) access during "evening" hours.
Line 11 then says that if any other client attempts to connect, disallow it.
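The time ACLs above boil down to a simple hour-of-day comparison. As a rough analogy only (this is shell arithmetic, not squid's own matching code, and the "morning" window is the 08:00-12:00 one from above):

```shell
#!/bin/sh
# Rough analogy for 'acl morning time 08:00-12:00':
# test whether a given HH:MM time falls inside a window.
in_window() {
    # $1 = time to test, $2 = window start, $3 = window end (all HH:MM)
    t=$(echo "$1" | tr -d ':')
    s=$(echo "$2" | tr -d ':')
    e=$(echo "$3" | tr -d ':')
    [ "$t" -ge "$s" ] && [ "$t" -le "$e" ]
}

if in_window "$(date +%H:%M)" 08:00 12:00; then
    echo "inside the morning window: the 'allow' rule would match"
else
    echo "outside the morning window: the rule would not match"
fi
```

Remember that squid checks http_access rules in order and stops at the first match, which is why the catch-all deny goes last.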
But we can also take the use of ACLs further, by telling Squid to match certain regexes in the URL, and in effect throw the request in the bin (or more accurately -- "&>/dev/null" :-)).
To do this, we can specify a new ACL name that will hold a particular pattern. For example
1. acl naughty_sites url_regex -i sex
2. http_access deny naughty_sites
3. http_access allow valid_clients
4. http_access deny all

[ ** Remember -- don't use the line numbers above!! ** ]
Line 1 says that the word "sex" is associated with the ACL name "naughty_sites". The clause url_regex says that the ACL is of that type -- i.e. it is to check the words contained within the URL -- and the -i says to ignore case.
Line 2 says to deny all clients access to any website whose URL matches anything from the ACL "naughty_sites".
Line 3 says to allow access from "valid_clients".
Line 4 says to deny any other requests.
So, I suppose you are now wondering: "how do I specify more than one regex?". Well, the answer is simple: you can put them in a separate file. For example, suppose you wanted to filter the following words, and disallow access if they appeared in the URL:
sex porn teen
You can add them to a file (one word per line), say in:
/etc/squid/bad_words.regex
Then, in "/etc/squid.conf" you can specify:
acl bad_sites url_regex -i "/etc/squid/bad_words.regex"
http_access deny bad_sites
http_access allow valid_clients
http_access deny all
Which probably makes life easier!! :-). That means that you can add words to the list whenever you need to.
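Incidentally, you can preview what such a word list will catch before handing it to squid: grep -i -f takes the same one-pattern-per-line file and matches it case-insensitively, much as the url_regex -i ACL does. A quick sketch (the word-list copy and the URLs are my own examples):

```shell
#!/bin/sh
# Preview a url_regex word list with grep as a stand-in for squid:
# -i ignores case, -f reads one pattern per line, just like the ACL file.
words=$(mktemp)
printf 'sex\nporn\nteen\n' > "$words"

check() {
    if echo "$1" | grep -q -i -f "$words"; then
        echo "DENY $1"
    else
        echo "ALLOW $1"
    fi
}

check "http://www.example.com/news"        # no banned word in the URL
check "http://www.example.com/TEEN/pics"   # matches "teen" despite the case
```

This is only an approximation of squid's matcher, but it catches the classic surprise of substring regexes: a word like "sex" will also match inside longer, innocent words.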
There is also a much easier way of filtering both regexes and domain names, using a program called SquidGuard. More about that later.....
Now we come to the most important part -- actually running squid. If this is the first ever time that you'll be initialising squid, then there are a few options that you must pass to it.
Typically, the most common options that can be passed to squid can be summed up in the following table.
Flag | Explanation |
---|---|
-z | This creates the swap directories that squid needs. This should only ever be used when running squid for the first time, or if your cache directories get deleted. |
-f | This option allows you to specify an alternative configuration file to use, rather than the default "/etc/squid/squid.conf". However, this option should rarely be needed. |
-k reconfigure | This option tells squid to re-load its configuration file, without stopping the squid daemon itself. |
-k rotate | This option tells squid to rotate its logs, and start new ones. This option is useful in a cron job. |
-k shutdown | Stops the execution of Squid. |
-k check | Checks to ensure that the squid daemon is up and running. |
-k parse | Parses the configuration file and reports any errors, then exits without signalling the running daemon. |
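The "-k rotate" option from the table above is exactly what a cron job wants. A sketch of a crontab entry that rotates the logs every night at midnight (the path to the squid binary is an assumption and may differ on your system):

```
# min hour day month weekday  command
0 0 * * *  /usr/sbin/squid -k rotate
```

Rotating nightly keeps access.log from growing without bound while still leaving a full day's traffic in one file.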
The full listing of the available options, however, is as follows:
Usage: squid [-dhsvzCDFNRVYX] [-f config-file] [-[au] port] [-k signal]
       -a port   Specify HTTP port number (default: 3128).
       -d level  Write debugging to stderr also.
       -f file   Use given config-file instead of /etc/squid/squid.conf
       -h        Print help message.
       -k reconfigure|rotate|shutdown|interrupt|kill|debug|check|parse
                 Parse configuration file, then send signal to
                 running copy (except -k parse) and exit.
       -s        Enable logging to syslog.
       -u port   Specify ICP port number (default: 3130), disable with 0.
       -v        Print version.
       -z        Create swap directories
       -C        Do not catch fatal signals.
       -D        Disable initial DNS tests.
       -F        Foreground fast store rebuild.
       -N        No daemon mode.
       -R        Do not set REUSEADDR on port.
       -V        Virtual host httpd-accelerator.
       -X        Force full debugging.
       -Y        Only return UDP_HIT or UDP_MISS_NOFETCH during fast reload.
If you are running squid for the first time, then log in as user "root" and type in the following:
squid -z
This will create the cache.
Then you can issue the command:
squid
And that's it -- you have yourself a running proxy server. Well done!!
SquidGuard is an external "redirect program": squid forwards the requests sent to it on to SquidGuard, whose job is to allow much finer-grained filtering than squid itself provides.
It should be pointed out, though, that SquidGuard is not necessary for simple filters -- squid's own ACLs, as above, will do.
SquidGuard is available from (funnily enough) http://www.squidguard.org/download. This site is very informative and has lots of useful information about how to configure SquidGuard.
As with Squid, SquidGuard is available in both RPM and .tgz formats.
If your distribution supports the RPM format then you can install it in the following way:
su - -c "rpm -i ./SquidGuard-1.2.1.noarch.rpm"
Should your distribution not support the RPM format, then you can download the sources and compile it, in the following manner:
tar xzvf ./SquidGuard-1.2.1.tgz
cd ./SquidGuard-1.2.1    [or whichever directory the tarball created]
./configure
make && make install
The files should be installed in "/usr/local/squidguard/"
Before we can actually start tweaking the main "/etc/squidguard.conf", we must first make one small change to our old friend "/etc/squid.conf". In the file, locate the TAG:
#redirect_program none
Uncomment it, and replace the word "none" with the path to the main SquidGuard binary. If you don't know where it is, you can issue the command:
whereis squidGuard
And then enter the appropriate path and filename. Thus, it should now look like:
redirect_program /usr/local/bin/squidGuard
Save the file, and then type in the following:
squid -k reconfigure
Which will re-load the configuration file.
Ok, now the fun begins. Having told squid that we will be using a redirect program to filter requests sent to it, we must now define rules to match that.
SquidGuard's main configuration file is "/etc/squidguard.conf". Out of the box, this file looks like the following:
-------------------
logdir /var/squidGuard/logs
dbhome /var/squidGuard/db

src grownups {
    ip 10.0.0.0/24    # range 10.0.0.0 - 10.0.0.255
    # AND
    user foo bar      # ident foo or bar
}

src kids {
    ip 10.0.0.0/22    # range 10.0.0.0 - 10.0.3.255
}

dest blacklist {
    domainlist blacklist/domains
    urllist    blacklist/urls
}

acl {
    grownups {
        pass all
    }

    kids {
        pass !blacklist all
    }

    default {
        pass none
        redirect http://localhost/cgi/blocked?clientaddr=%a&clientname=%n&clientuser=%i&clientgroup=%s&targetgroup=%t&url=%u
    }
}
-------------------
What I shall do, is take the config file in sections, and explain what each part of it does.
logdir /var/squidGuard/logs
dbhome /var/squidGuard/db
The first line sets up the directory where the logfile will appear, and creates it if it does not exist.
The second line sets up the directory where the database(s) of banned sites, expressions, etc, are stored.
src grownups {
    ip 10.0.0.0/24    # range 10.0.0.0 - 10.0.0.255
    # AND
    user foo bar      # ident foo or bar
}
The above block of code sets up a number of things. Firstly, the src block "grownups" is defined by specifying an IP address range and saying which users are members of this block. For convenience's sake, the generic terms "foo" and "bar" are used here as an example.
It should also be pointed out that the user TAG can only be used if an ident server is running on the client machines making the requests; otherwise it is void.
src kids {
    ip 10.0.0.0/22    # range 10.0.0.0 - 10.0.3.255
}
This section of statements sets up another block, this time called "kids" which is determined by a range of IP addresses, but no users.
You can think of grownups and kids as being ACL names similar to those found in "/etc/squid.conf".
dest blacklist {
    domainlist blacklist/domains
    urllist    blacklist/urls
    expression blacklist/expressions
}
This section of code is significant since it ties the dest list "blacklist" to specific filtering methods. There are three main ways that SquidGuard applies its filtering:
1. domainlist -- lists domains, and only those, one per line, for example:
nasa.gov.org squid-cache.org cam.ac.uk
2. urllist -- specifies individual webpages (omitting the "www."), e.g.
linuxgazette.com/current cam.ac.uk/~users
3. expression -- regex words that should be banned within the URL, thus:
sex busty porn
The last block of code:-
acl {
    grownups {
        pass all
    }

    kids {
        pass !blacklist all
    }

    default {
        pass none
        redirect http://localhost/cgi/blocked?clientaddr=%a&clientname=%n&clientuser=%i&clientgroup=%s&targetgroup=%t&url=%u
    }
}
This says that for the acl block, the "grownups" section passes all requests -- i.e. even URLs and expressions contained within the dest blacklists are allowed.
Then it says that for the "kids" section, pass all requests except those matching the dest blacklists; a blacklisted URL is instead handled via the default section.
The default section says that if a request is found to come from neither "grownups" nor "kids", then access to the website is not allowed, and the user is redirected to another webpage -- most likely an error page.
The variables passed with this redirect statement, specify the type of request, etc, which can then be processed by a cgi-script to produce a custom error message, for example.
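Such a cgi-script need not be fancy. Here is a minimal sketch of one (the script, its name and its wording are my own invention -- it simply prints an "access denied" page and echoes back the name=value pairs that SquidGuard appends to the redirect URL, which the web server delivers in QUERY_STRING):

```shell
#!/bin/sh
# blocked.cgi -- a hypothetical, minimal handler for SquidGuard's
# redirect URL. The web server hands the "?clientaddr=...&url=..."
# part to the script in the QUERY_STRING environment variable.
blocked_page() {
    printf 'Content-type: text/plain\n\n'
    echo "Access denied."
    # Turn "a=1&b=2" into one "  name: value" line per variable.
    echo "$QUERY_STRING" | tr '&' '\n' | sed -n 's/^\(..*\)=\(.*\)$/  \1: \2/p'
}

# Demonstration value, standing in for a real web server:
QUERY_STRING="${QUERY_STRING:-clientaddr=10.0.0.5&url=http://example.com/}"
blocked_page
```

A real handler would more likely emit HTML and log the attempt, but the mechanics of reading the SquidGuard variables are the same.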
It should be pointed out that in order for filtering to take place, the following piece of code should be present:
default { pass none }
Either with or without the redirect clause.
There are more advanced configuration options that can be used within this file. Examples can be found out at http://www.squidguard.org/configuration.
This completes the tutorial for both Squid and SquidGuard. Further information can be found at all of the URLs embedded in this document, and at my website, which is at the following address:
www.squidproxyapps.org.uk

OK, ok, I know you're all thinking: "Not another backup script". Well, there has been some talk of this on the TAG (The Answer Gang) mailing list recently, so I thought I'd jump on the band-wagon.....
This script is really quite simple -- it uses a configuration file (plain text) which lists all of the files (and directories) that you want backed up, and then puts them in a gzipped tarball, in a specified location.
Those of you who are familiar with BASH shell scripting might find this a little remedial; however, I hope that my in-line comments will aid those who are still learning the shell.
-------------------
#!/bin/bash
#################################################
#Keyfiles - tar/gzip configuration files        #
#Version: Version 1.0 (first draft)             #
#Ackn: based on an idea from Dave Turnbull      #
#Author: Thomas Adam                            #
#Date: Monday 28 May 2001, 16:05pm BST          #
#Website: www.squidproxyapps.org.uk             #
#Contact: thomas@squidproxyapps.org.uk          #
#################################################

#Comments herein are for the benefit of Dave Turnbull :-).

#Declare Variables
configfile="/etc/keyfiles.conf"
tmpdir="/tmp"
wrkdir="/var/log/keyfiles"
tarfile=keyfiles-$(date +%d%m%Y).tgz
method=$1      #options passed to "keyfiles"
submethod=$2   #options supplied along with "$1"
quiet=0        #Turns on verbosity (default)
cmd=$(basename "$0")   #strip path from filename.
optfiles="Usage: $cmd [--default (--quiet)] [--listconffiles] [--restore (--quiet)] [--editconf] [--delold] [--version]"
version="keyfiles: Created by Thomas Adam, Version 1.0 (Tuesday 5 June 2001, 23:42)"

#handle error checking...
if [ ! -e "$configfile" ]; then
  for beepthatbell in 1 2 3 4 5; do
    echo -en "\x07"
  done
  mail -s "[Keyfiles]: $configfile not found" "$USER"
  exit 1
fi

#Make sure we have a working directory
[ ! -d "$wrkdir" ] && mkdir "$wrkdir"

#Parse options sent via command-line
if [ -z "$method" ]; then
  echo "$optfiles"
  exit 0
fi

#Check command line syntax
check_syntax () {
  case $method in
    --default)
      cmd_default
      ;;
    --listconffiles)
      cmd_listconffiles
      ;;
    --restore)
      cmd_restore
      ;;
    --editconf)
      exec "$EDITOR" "$configfile"
      ;;
    --delold)
      cd "$wrkdir" && rm -f ./*.old > /dev/null
      exit 0
      ;;
    --version)
      echo "$version"
      exit 0
      ;;
    *)
      echo "$optfiles"
      exit 0
      ;;
  esac
}

#Now the work begins.....

#declare function to use "--default" settings
cmd_default () {
  #tar/gzip all files contained within $configfile
  #(-z matches the ".tgz" extension and the restore code)
  if [ "$submethod" ]; then
    tar -czPpf "$tmpdir/$tarfile" $(cat "$configfile") > /dev/null 2>&1
  else
    tar -vczPpf "$tmpdir/$tarfile" $(cat "$configfile")
  fi

  #If the contents of the directory is empty......
  if [ "$(ls -1 "$wrkdir" | grep -c .)" = "0" ]; then
    mv "$tmpdir/$tarfile" "$wrkdir"
    exit 0
  fi

  #......otherwise rename the previous backups first
  for i in "$wrkdir"/*.tgz; do
    mv "$i" "$i.old"
  done
  mv "$tmpdir/$tarfile" "$wrkdir"
}

#List files contained within $configfile
cmd_listconffiles () {
  sort -o "$configfile" "$configfile"
  cat "$configfile"
  exit 0
}

#Restore files......
cmd_restore () {
  cp "$wrkdir"/keyfiles*.tgz /
  cd /
  #Check for quiet flag :-)
  if [ "$submethod" ]; then
    tar -xvzmpf keyfiles*.tgz > /dev/null 2>&1
  else
    tar -xvzmpf keyfiles*.tgz
  fi
  rm -f /keyfiles*.tgz
  exit 0
}

#call the main function
check_syntax
-------------------
Suffice it to say, the main changes that you might have to make are to the following variables:
configfile="/etc/keyfiles.conf"
tmpdir="/tmp"
wrkdir="/var/log/keyfiles"
However, my script is sufficiently intelligent to check for the presence of $wrkdir and, if it doesn't exist, create it.
You will also have to make sure that you set the appropriate permissions, thus:
chmod 700 /usr/local/bin/keyfiles
The most important file is the script's configuration file, which, for me, looks like the following:
-------------------
/etc/keyfiles.conf
/etc/rc.config
/home/*/.AnotherLevel/*
/home/*/.fvwm2rc.m4
/home/solent/ada/*
/root/.AnotherLevel/*
/root/.fvwm2rc.m4
/usr/bin/header.sed
/usr/bin/loop4mail
/var/spool/mail/*
-------------------
Since this file is passed to the main tar program, the use of wildcards is valid, as in the above file.
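To see why the wildcards work, note that the `$(cat $configfile)` expansion hands the file's contents to the shell, which then glob-expands any patterns before tar ever sees them. A throwaway demonstration (all paths here are made-up examples, not the real config):

```shell
# Create some example files and a throwaway config listing them
mkdir -p /tmp/kf-demo/etc /tmp/kf-demo/home
echo "hello" > /tmp/kf-demo/etc/rc.config
echo "world" > /tmp/kf-demo/home/notes.txt
printf '%s\n' '/tmp/kf-demo/etc/*' '/tmp/kf-demo/home/notes.txt' > /tmp/kf-demo.conf

# $(cat ...) expands to the list of patterns; the shell glob-expands
# them, so tar receives real filenames, wildcards included
tar -czPf /tmp/kf-demo.tgz $(cat /tmp/kf-demo.conf)

# List what ended up in the tarball
tar -tzf /tmp/kf-demo.tgz
```

The -P flag keeps the leading "/" on each path, which is what lets the restore step unpack files straight back to their original locations.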
It should be pointed out that each time the script runs, the last backup file created, i.e. "keyfiles-DATE.tgz", is renamed to "keyfiles-DATE.tgz.old" before the new file takes its place.
This is so that, if you need to restore the backup file at any time, my script knows which file to use by checking for a ".tgz" extension.
Because of this feature, I have also included a "--delold" option which deletes all the old backup files from the directory.
To use the program, type:
keyfiles --default
This will start the backup process. If you want to suppress the verbosity, you can add the --quiet flag:
keyfiles --default --quiet
The other options that this program takes, are pretty much self-explanatory.
This backup script is by no means perfect, and there are better ones available. Any comments you have would be appreciated!
Way, way back in the days when John Fisk, the illustrious founder of this special magazine, was writing this column, another author, Larry Ayers, used to do a series of program reviews. He mentioned briefly a new program called Nedit, but never reviewed it.
So, I will :-)
I have been using Nedit for about three years now. I do all of my work in it -- when I am in X11 that is. A typical window of Nedit, looks like this screenshot.
This program offers a huge selection of features. Probably the most popular is the syntax highlighting feature, which covers a host of languages, including:
If, for some bizarre reason, you program in an obscure language that is not listed above, you can specify your own regex patterns.
Nedit also allows you to do complex search and replace methods by using case-sensitive regex pattern matches.
A typical search / replace dialog box, looks like the following:
This allows you to form complex searches.
Each of the menus can be torn off to remain as sticky windows. This can be particularly useful if you use a particular menu over and over and don't want to keep clicking on it each time.
This program is over-loaded with options, many of which I am sure are useful, but I have not been able to find a use for all of them yet. And as if that were not enough, Nedit allows you to write custom macros so that you can define even weirder functions.
I recommend this program to everyone, and while I don't want to reopen the Emacs / Vim argument, I really would consider it a viable alternative to the over-bloated "X11-Emacs" package that eats up far too much memory!! :-)
You can get Nedit from the following:
www.nedit.org

Enjoy it :-)
Well, that concludes it for this month -- I had not expected it to be quite this long! My academic year is more or less at a close, and I have exams coming up at the end of May. Then I shall be free over the summer to pursue all the Linux ideas that have been formulating in my brain ( -- that is, what's left of it after Ben Okopnik brainwashed me) :-)
Oh well, until next month -- take care.
Send Your Comments |
Any comments, suggestions, ideas, etc can be mailed to me by clicking the e-mail address link below:
mailto:thomas_adam16@yahoo.com
Due to popular demand, I created a Slackware geek caricature as well as a Red Flag geek caricature. The Slackware character comes across to me as the very cool, confident Linux hacker. If you know Slackware, chances are you know Linux inside and out ;-)
The Red Flag geek caricature comes from Asia. Being a Linux distribution developed in China, it was pretty clear-cut how this fellow was going to look (well, to me anyway). Let's hope this distribution continues to grow and places a bit of pressure on MS. I'm sure this particular distro is going to be very popular amongst our Asian buddies.
My previous LG cartoons: issue72 issue73 issue76
Important - You can view my other artwork and sketches on my new
website.
Cartoonist Shane is taking a long holiday in Asia and staying at youth
hostels (YHAs). Carol and Tux decided to accompany him....
Recent HelpDex cartoons are
here
at the CORE web site.
...turning right on Future Avenue. This article adds some
historical and architectural perspective on the world of
hypermedia and what motivated its pioneers. The idea of hypermedia predates the
World Wide Web by some forty-five years, so this article starts by describing
the pioneers' work. No single correct definition of the term hypermedia exists, but the
article will supply a couple of possible definitions derived from the ideas of
the pioneers.
Afterwards, four major steps in the architectural evolution of
actual hypermedia systems are described. When reading that part, keep
in mind how software has generally evolved (away from a centralistic
and toward a more modular design). Not surprisingly, this is also
reflected in the development of hypermedia systems.
In the mid-forties the accumulated knowledge of mankind was growing
rapidly. This made it exceedingly difficult for people to store
and retrieve information in an efficient and intuitive manner. Bush
[1] realized the problem of ``information
overload'' and came up with a visionary solution for storage, organization and
retrieval of information. He devised a mechanical device that would work by
the same principle of associative indexing as the human brain and especially
the human memory. The machine, called the Memex (short for Memory extension),
made Bush a pioneer within a field later to be known as hypertext when dealing
with text, and hypermedia when mixing several kinds of media. Today the terms
hypertext and hypermedia are used interchangeably.
The principle of hypertext is a well known concept in literature. At the
same time as one reads linearly through a text it is often possible to jump to
footnotes, annotations, or references to different materials. Bush imagined
that parts of the text could be touched;
thereby leaving the linear way of reading and be taken directly to the
footnote, the annotation, or to some other material. This way of
reading leans upon a possible definition of hypertext as a paradigm
for managing information [2]. Where physical references (e.g. to an
article or a book) can be difficult, or even impossible, to follow
because the source referred to is unavailable to the reader,
electronic hypertext makes it possible to gather a corpus of
information and radically change the way a document is read or
accessed. One could take this idea one step further and enable the
reader to add new links between different documents, and to add
comments to the links or to parts of the document itself.
It was Bush's vision that the Memex would make all these things, as
well as a couple of others, mechanically possible. Nowadays, of
course, what probably comes to one's mind when reading the previous
paragraph is the World Wide Web [3] and maybe Bill
Gates' vision in the mid-nineties of ``information at your
fingertips'' [4]. The Memex in contrast
would store information on microfilm within the machine, but the
principle remains the same. The documents stored in the Memex were
to be linked together using associative indexing as opposed to
numerical or alphabetical indexing. Using associative indexing,
accessing data would become more intuitive for the user of the
machine. Another definition of the term hypertext could then be
a way of organizing information associatively [2]. Where
associations in the brain become weaker as a function of time and the
number of times the association is used to retrieve information,
associations between documents in the Memex would retain their
strength over time.
Both previous definitions of the term hypertext are concerned with
navigation or a way of navigating through a corpus of information. The
Memex can thus be thought of as a navigational hypermedia system,
allowing its users to jump between documents adding to the reading
experience. This changed experience could form the basis of yet
another possible definition (or a broader version of the previous one)
of the term hypertext as a non-linear organization of information
[2].
Engelbart read Bush's article in the late-forties, but some fifteen
years had to pass before the technology had reached a sufficient
level of maturity for Engelbart to develop the world's first system
utilizing Bush's concept of hypertext. NLS (oN-Line System) supported
(1) the user in working with ideas, (2) the creation of links between
different documents (in a broad sense), (3)
teleconferencing, (4) text processing, (5) sending and
receiving electronic mail, and finally enabled (6) the user to
configure and program the system. This was something unheard of at that
time. To better and more efficiently make this functionality
available to the user, the system made use of some groundbreaking
technologies for its time.
Among other things Engelbart invented something akin to the
mouse to enable the user to point and click on the screen, and a window
manager to make the user interface appear in a consistent manner. The
hypertext part comprised only a small part of NLS's overall
functionality, whose major focus was on providing a tool for helping a
geographically distributed team to better collaborate. Today, this
kind of software is often referred to as groupware.
The user interface was revolutionary and far ahead of its time for
computer users at all levels. Previously, most programmers
interacted with computers only indirectly through punch cards and output
from a printer. NLS, as a whole, served as a source of inspiration for
systems to come, and inspired Apple in the development of the
graphical user interface in the early eighties.
Like Engelbart, Nelson was inspired by Bush's early article
[1]. But, unlike Bush and Engelbart, Nelson
came from a background in philosophy and sociology. In the early sixties, he
envisioned a computer system that would make it possible for writers to work
together writing, comparing, revising, and finally publishing their work
electronically.
Nelson's Xanadu has never really moved beyond the visionary stage,
although a release of the Xanadu system has been announced on several
occasions. It is hard to define exactly what Xanadu is, as it is not
so much a system in itself, but rather a set of ideas that other
systems may adhere to. The name stems from a poem by British writer
Coleridge, who used the word Xanadu to denote a world [10]
of literary memory where nothing would be forgotten. And indeed, one
of the ideas behind Xanadu was to create a docuverse: a virtual
universe where most of the human knowledge is present. It was also
Nelson who coined the term ``hypertext'' in the mid-sixties, although
his definition was to be understood in the broad sense covering both
hypertext and hypermedia.
Another one of Nelson's ideas was a special way of referencing
other documents (or parts of them), such that a change in the
aggregated document would automatically propagate to the composite
document; copying by reference or creating a virtual copy as Nelson
put it. This way an author could charge money in return for providing
and keeping the author's part of the overall document up to date. To
some extent, this idea resembles today's deep links, although
deep linking has spawned some controversy over copyright, an
area of conflict that Nelson's virtual-copy mechanism was meant to prevent in the first
place. Many of the original ideas from the Xanadu project eventually
managed to find their way into the World Wide Web and other hypermedia
systems.
When describing the architecture of different kinds of hypermedia
systems, three components are always present. The components and their
purposes are briefly described below to better express why the
evolution from monolithic to component-based systems has taken
place. Even the earliest hypermedia systems made use of a classic
three-tier model, with the application layer on top taking care of
presenting information to the user. Below this layer is the link
layer, that makes up the model of the system and takes care of
managing structure and data. It is the associations and the
information needed to represent these associations that is termed
structure. Data, on the other hand, refers to the actual content of a
document. Finally, the storage component takes care of storing
information ranging from just the structure to both structure and
content of the documents, depending on the system.
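The three-tier split described above can be expressed as a rough sketch. All class and method names below are mine, invented for illustration; no real hypermedia system's API is implied:

```python
# A toy sketch of the classic three-tier hypermedia model.
# All names are illustrative; no real system's API is implied.

class Storage:
    """Bottom tier: persists structure (and possibly content)."""
    def __init__(self):
        self.links = []

    def save_link(self, link):
        self.links.append(link)


class LinkLayer:
    """Middle tier: manages the associations (structure) between documents."""
    def __init__(self, storage):
        self.storage = storage

    def associate(self, source, target):
        link = {"from": source, "to": target}
        self.storage.save_link(link)
        return link


class Application:
    """Top tier: presents information to the user."""
    def __init__(self, link_layer):
        self.link_layer = link_layer

    def make_link(self, source, target):
        return self.link_layer.associate(source, target)


# In a monolithic system all three objects live in one process;
# later architectures move Application, then Storage, out of the core.
app = Application(LinkLayer(Storage()))
app.make_link("doc-a.txt", "doc-b.txt")
```

The evolution the article goes on to describe amounts to moving these layers out of a single process, one at a time, while keeping the same logical division.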
The development has happened in evolutions where, for each new
generation, some functionality previously part of the core of the
system has been factored out into its own component (Figure
, where a bounding box represents components that are
part of the core of the hypermedia system). The description of
architectures stems partially from [5].
The dominant architecture among early systems was the monolithic one
(Figure , on the left). All three layers were
contained within one logical process, although this division was
invisible to the user. A monolithic system is considered a closed
system in that it neither publishes an application programming
interface (API) or a protocol for describing how structure and data
are to be stored. This made it pretty much impossible for other
systems to communicate and exchange data with the monolithic
system. Even basic functionality, such as editing information stored
in the system was managed by internal applications, only supporting a
few data formats. So, before one could work on existing data, they had
to be imported. This made it impossible to, say, directly store a
document created in a word processor in the monolithic system. At
least, not before the content of the document had been copied into the
internal editor and saved.
The file formats supported by the systems were limited to what the
developers found useful. If you were to import the contents of a
document created in a word processor, special formatting (part of the
text made bold, or a change in the choice of font etc.) would be
discarded. This put the user in a dilemma: if hypertext functionality
was to be fully utilized, it happened at the expense of abandoning
one's powerful and familiar application environment in return for the
internal applications of a hypermedia system. A far from ideal
solution, because designers of hypermedia systems are specialists in
developing hypermedia software, not word processing or other kinds of
software.
Along with the import problem came a related problem: The system is
limited in the number of data formats it can create associations
between. Both documents, or ends, of the association have to reside
within the system boundary; that is, stored within the monolithic
system. Export of data from the system was also far from
straightforward, because the systems made use of their own internal
format for storage; a format rarely supported by contemporary
hypermedia systems, causing information to be lost during the export
process as well.
Despite these disadvantages, monolithic systems were widely used in
the eighties. Maybe they owe a part of their success to the fact that
other applications used in that period were not too keen on exchanging
data and communicating with each other either. Examples of monolithic
hypermedia systems are KMS [2,6], Intermedia
[7], Notecards [8], and to some extent the
Microsoft Winhelp system used to generate Windows help
files. Although, strictly speaking, the Microsoft Winhelp system and a
number of other help systems have a different primary use than
traditional hypermedia systems, they nevertheless make use of
hypermedia functionality.
The description of monolithic systems revealed a number of
shortcomings. As a solution to some of these problems the user
interface component was moved out of core of the system and into its
own process (Figure , in the middle, with the
shifted rectangles indicating that a number of applications may now
access the hypermedia system). Client/server hypermedia systems come
in two flavors: The link server system (LSS) with its primary focus on
structure; that is the associations between documents, and the
hyperbase management system (HBMS) focusing on structure as well as
content.
From a software point of view the client/server based hypermedia
systems are open in the sense that they publish a protocol and an API
for applications to use. If an existing application was to offer
hypermedia functionality to its users, it would have to make use of
these protocols and API's. In the hypermedia world, however, the
definition of openness differs from the general definition. A
hypermedia system that requires the application to make use of a
specific format for specifying both structure and data is
considered a closed system, even if it publishes protocols and
API's. An open system, on the contrary, is one that only
specifies a format for structure. By not imposing a particular format
on the actual content itself, an open system is able to handle a lot
of different data formats and create associations between types of
data created by various applications outside the hypermedia system.
From the general definition of openness it follows that the HTTP
protocol of the World Wide Web is an open protocol in that it
specifies a number of messages to be exchanged between the client and
the server and the expected responses. However, the structure is
embedded within the HTML document as a number of hrefs and
other tags specifying the structure. The implication of this is that
special applications (browsers) are required for parsing HTML files
looking for hrefs (and other tags). That is why the World
Wide Web is a closed hypermedia system when subjected to the
hypermedia definition of openness, and that is why, in a client/server
system, there can be any number of applications making use of the core
system, with information stored on the server.
Other systems, on the contrary, do not impose a particular format on
the content of the documents. However, they still require the source
code of the application to be modified to make calls to some API. So,
the client/server based systems from the early nineties solved a
number of problems present in the monolithic systems by not making the
application component an integral part of the hypermedia system. An
example of an LSS based system is Sun's Link Service [9],
while the World Wide Web [3] is an example of an
HBMS system, storing documents as part of the system as files in a
file system.
The OHS is a further development of the client/server concept, and
therefore OHS's and client/server systems have a lot of features in
common. Where client/server systems could be classified in terms of
LSS and HBMS, an OHS is typically a descendant of one of these. OHS's
are only made up of the link component (Figure
, on the right), and are therefore often
referred to as middleware in the sense that (1) the component contains
functionality to be used or shared across a range of applications, (2)
it works across different platforms, (3) it may be distributed, and
finally (4) it publishes protocols and API's. An OHS is
distinguishable from a client/server system in that there is no
central storage, as storing documents is no longer part of the core of
the system.
Because data is stored separately from structure, it is possible to
support associations between just about any data format, e.g. text,
HTML, and graphics. When the structure associated with a document is
requested by an application, it is sent from the link service to the
application and applied to the data. This way a greater number of
applications can interact with the system, as they no longer have to
make use of a specific protocol for storing data, e.g. HTML on the
World Wide Web. Practically speaking, the structural information may
consist of a number of attribute/value-pairs, where the number of
attributes varies depending on the type of data. For an image,
coordinates may be specified, whereas for textual data an offset may
be sufficient.
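As an illustration of such attribute/value pairs, the structural records a link service might keep, separate from the documents they point into, could look like the following. The field names and values are invented for the example, not taken from any actual system:

```python
# Hypothetical structure records kept by a link service, stored
# separately from the documents they anchor into.

text_anchor = {
    "document": "report.txt",
    "type": "text",
    "offset": 128,    # character position of the anchor in the text
    "length": 12,     # span of the anchored phrase
}

image_anchor = {
    "document": "diagram.png",
    "type": "image",
    "x": 40, "y": 65,           # coordinates of the anchored region
    "width": 100, "height": 80,
}

# A link is simply a pair of such records; the attributes present
# vary with the data type (offset for text, coordinates for images).
link = {"from": text_anchor, "to": image_anchor}

# When an application requests structure for "report.txt", the link
# service sends records like these, and the application applies them
# to the data it already holds.
print(link["from"]["offset"])
```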
OHS's solved some of the problems introduced by the monolithic and the
client/server systems, but are far from ideal. Every OHS defines its
own protocols and API's, and not all OHS's support the same
functionality. Descendants of LSS systems typically allow only for
associations to be created between already existing documents, while
descendants of HBMS systems, in addition to the LSS feature mentioned
above, may also include content related functionality such as version
and concurrency control. The result is that (1) an application written
with a specific OHS in mind, will not work with another system, (2)
because of the different protocols and API's, stored information
cannot be shared across different systems, (3) because of the lack of
a common standard specifying a minimal protocol or API, every
system implements its own API, making individual systems unable to
communicate with each other. Furthermore, although quite a few other
domains exist, most OHS's are designed with navigational hypermedia in
mind. An example of an OHS descending from LSS is Microcosm
[12], while an HBMS descendant is Hyperform [11].
Component Based Open Hypermedia Systems (CB-OHS's) are very similar to
``simple'' open hypermedia systems. However, as the name implies,
there is a greater focus on the notion of components. Besides the
component issue, the thing to note here is that this kind of system
supports several kinds of structural domains, and may store its data
at different locations. So, it differs primarily from the OHS in the
link component.
Compared to the OHS's, the first generation CB-OHS's (1G CB-OHS) tried
to solve the problem of lack of cooperation between individual
components by introducing standards. So far there is an agreed upon
standard specifying how the application and the structure service in
the navigational domain should communicate, and further standards are
underway. Another goal of the 1G CB-OHS is that it should be possible
to extend the system to support other domains as well, simply by
adding a new structure service (that is, a new component) to support
the new domain, e.g. the taxonomic or the spatial
domains. Alternatively an existing component could be modified to
handle several domains as was the case with the OHS. Compared to the
CB-OHS, an OHS can be thought of as comprising just one structure
service. However, modifying an existing component this way is not a
very clean and flexible solution. But common to all structure
components is that they access the storage component through the same
API. The implication of this is that a new structure service will
therefore automatically ``inherit'' mechanisms for versioning,
concurrency control or what else the storage component has to offer.
For the 1G systems to meet these goals the structure service makes a
number of protocols and API's available to its clients (the browser or
whatever application wishes to communicate with the hypermedia
system. Because the system adheres to the hypermedia definition of
openness it can essentially be any type of application). Figure
shows an architecture with three structural
components, each representing a structural domain. Among other things
a structural domain deals with the special abstractions used,
e.g. node, link, and context within the navigational domain. As
described in the previous section, the special abstractions within
every domain make it a good candidate for a new component instead of
intermixing the functionality with an existing one.
The structure component communicates with the storage component
(called the hypermedia store), but because the components no longer
exist within a single process boundary some additional work has to go
into the communication process. Local communication can be handled by
some form of Interprocess Communication (IPC) or Local Procedure Call
(LPC), but across a network things get complicated. To support network
communication a lot of work went into the development of custom
component frameworks. This is also the main difference between the
first and the second generation of CB-OHS's. Where the 1G CB-OHS made
use of custom frameworks, the 2G CB-OHS makes use of general
frameworks like COM or CORBA. The developer can then focus on
developing hypermedia functionality and ignore the lower level details
of the communication process. The problem of integrating existing
applications still exists, though, because modifying an existing
application to make use of a component framework is generally a
non-trivial task.
The definition of standards, such as the one between the structure
component and the application, is a result of the work of the Open
Hypermedia Systems Working Group (OHSWG). As standards evolve they
will benefit users at all levels [13]. The end user will come
to think of hypermedia functionality in the same way as with cut,
copy, and paste today [12]; as something that is a natural
ingredient of every application. At some point in the future it
might be possible to add menu items such as ``Start link'' and
``Finish link'' etc. to every application, and implementing them will
be no more difficult than today's cut, copy, and paste. For producers
of content, common standards will also come in handy, as documents and
structures may be reused across platforms and hypermedia system
boundaries. Finally, besides the editing functionality previously
described, the developer will be able to focus on what a standardized
system offers, regardless of the actual system, as long as it adheres
to agreed-upon standards.
Hypermedia systems have emerged from a need for organizing an ever
growing pile of information better than by simply storing things
alphabetically. Since Bush described his thoughts of a machine that
functionally resembled the way the human memory works, the knowledge
of mankind has doubled many times, and the World Wide Web has replaced
many of the earlier hypermedia systems and made quite a bit of the
visions of the pioneers come true. However, at the same time, it is
worth noting that the World Wide Web is a very simple system compared
to earlier as well as contemporary systems. But this simplicity itself
might very well be the main reason behind its success in delivering
hypermedia functionality to the general public.
The architecture has undergone a gradual development much like the
architecture of any other software. The monolithic systems were not
too keen on acknowledging the existence of other systems. Since then,
things have changed radically, and the systems of today are designed
to import and export data from and to a variety of formats. The common
denominator for import and export is often W3C standards such as SGML
or derivatives like XML or HTML. Add to this the ability of systems to
better allow reuse of functionality across different systems.
It is also worth noting that HTML, the basic data format of the World
Wide Web, the dominant hypermedia system in use today, keeps structure
and data together and therefore the World Wide Web is not considered
open in the hypermedia sense. Several (successful) attempts have been
made to make the World Wide Web a (component based) open hypermedia
system. All in all the area of hypermedia is a very large area of
ongoing research and there is a lot of elaborating material available
on the systems and the concepts briefly touched upon in this article.
Copyright (C) 2002 Ronnie Holm. Please email me and let me know where
this article is being used. Verbatim copying and redistribution of
this entire article is permitted in any medium if this notice is
preserved.
In a competitive world, there is a definite edge to developing applications as rapidly as possible. This can be done using PyGTK, which combines the robustness of Python with the raw power of GTK. This article is a hands-on tutorial on building a scientific calculator using PyGTK.
Well, let me quote from the PyGTK source distribution:
We are going to build a small scientific calculator using PyGTK. I will explain each stage in detail; going through each step of this process will help you get acquainted with PyGTK. I have also put a link to the complete source code at the end of the article.
This package is available with almost every Linux distribution. My explanation is based on Python 1.5.2 installed on a Linux Red Hat 6.2 machine. It would be good if you know how to program in Python, but even if you do not, do not worry! Just follow the instructions given in the article.
Newer versions of this package is available from :
The tutorial has been divided into three stages. The code and the corresponding output are given with each stage.
First we need to create a window. A window is actually a container: the buttons, tables, etc. will go inside this window.
Open a new file
The first line imports the methods from the module named gtk. That means we can now use the functions present in the GTK library.
Then we create an object of type GtkWindow and name it win. After that we set the size of the window: the first argument is the width and the second argument is the height. We also set the title of our window. Then we call the method named show. This method is present on all objects, and after setting the parameters of a particular object we should always call it. Only when we call show on a particular object does it become visible to the user. Remember that although you may have created an object logically, until you call its show method the object will not be physically visible.
We connect the window's destroy signal to the function mainquit. mainquit is an internal GTK function which closes the currently running application. Do not worry about signals for now; just understand that whenever we delete the window (for example, by clicking the cross mark at the top of the window), mainquit will be called. That is, when we delete the window, the application quits.
mainloop() is also an internal function of the GTK library. When we call mainloop, the application waits in a loop for events to occur. The window appears on the screen and simply waits in the 'mainloop' for our actions. Only when we delete the window does the application come out of the loop.
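The connect/mainquit idea can be sketched in plain Python, without GTK at all. The Signal class below is a hypothetical stand-in for GTK's signal machinery, meant only to show the shape of the mechanism:

```python
# Minimal sketch of the signal/callback idea behind win.connect(...)
# and mainquit(). The names here are illustrative, not part of PyGTK.

class Signal:
    def __init__(self):
        self.handlers = []

    def connect(self, handler):
        self.handlers.append(handler)

    def emit(self):
        for handler in self.handlers:
            handler()

quit_requested = []

def mainquit():
    # In GTK this would make mainloop() return.
    quit_requested.append(True)

destroy = Signal()
destroy.connect(mainquit)   # like win.connect("destroy", mainquit)

# Emitting "destroy" (e.g. the user closes the window) runs mainquit,
# which tells the main loop to stop.
destroy.emit()
print(quit_requested)  # prints [True]
```

The real mainloop does the same thing at a larger scale: it sits waiting for events, and each event is delivered to whatever handlers were connected to it.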
Save the file. Quit the editor and come to the shell prompt. At the prompt type :
Remember, you should be running X to view the output.
A screen shot of output is shown below.
Let us start writing the second file. We insert a box into the window; into this box we insert the table; and into this table we insert the buttons.
After the text box, we insert the table into the box. Setting the attributes of the table is trivial. The for loop inserts 4 buttons into each of 9 rows. Finally we insert the close button into the box. Save the file and run it with python stage2.py.
A screen shot of the output is given below.
Some functions must be written to make the application do the work of a calculator; these functions are termed the backend. These are the lines that are to be typed into scical.py.
Thus we have the complete scientific calculator ready; run it with python scical.py.
A snapshot of the final application is given below.
The source code of the stages can be downloaded by clicking at the links below.
They all have a .txt extension. Remove this extension before running the programs; for example, change stage1.py.txt to stage1.py before executing.
Many example programs are provided in the examples directory that comes along with the PyGTK package. On Red Hat Linux 6.2 you can find them under the /usr/doc/pygtk-0.6.4/examples/ directory. Run those programs and read their source code; this will give you ample help in developing complex applications.
Happy programming. Good bye!
All Qubism cartoons are here at the CORE web site.
In the vast world of GUI development libraries there stands apart
a library known as 'Qt' for C++, developed by Trolltech AS. 'Qt' was
commercially introduced in 1996, and since then many sophisticated
user interfaces have been developed using this library for varied
applications. Qt is cross-platform: it supports MS/Windows, Unix/X11
(Linux, Sun Solaris, HP-UX, Digital Unix, IBM AIX, SGI IRIX and many
other flavors), Macintosh (Mac OS X) and embedded platforms. Apart
from this, 'Qt' is object oriented, component based, and has a rich
variety of widgets at the disposal of the programmer. 'Qt' is
available in commercial versions as the 'Qt Professional' and 'Qt
Enterprise' editions. The Free Edition is the non-commercial version
of Qt and is freely available for download (www.trolltech.com).
First of all you need to download the library; I assume that you have
downloaded the Qt/X11 version for Linux, as the examples will be based
on it. You might require superuser privileges to install, so make sure
you are 'root'. Let's untar it into the /usr/local directory:
[root@Linux local]# tar -zxvf qt-x11-free-3.0.1.tar.gz
[root@Linux local]# cd qt-x11-free-3.0.1
Next you will need to compile and install the library with the options
you require. The 'Qt' library can be compiled with custom options
suiting your needs. We will compile it so that we get GIF reading,
threading, STL, remote control, Xinerama, XftFreeType (anti-aliased
font) and X Session Management support on top of the basic features.
Before we proceed further, remember to set some environment variables
that point to the correct locations, as follows:
QTDIR=/usr/local/qt-x11-free-3.0.1
export QTDIR PATH MANPATH LD_LIBRARY_PATH
You can include this information in the .profile in your home directory.
[root@Linux qt-x11-free-3.0.1]# ./configure -qt-gif -thread -stl -remote -xinerama -xft -sm
[root@Linux qt-x11-free-3.0.1]# make
[root@Linux qt-x11-free-3.0.1]# make install
If all goes well, you will have the 'Qt' library installed on your
system. In order to start writing programs in
C++ using the 'Qt' library you will need to understand some important
tools and utilities available with the 'Qt' library to ease your job.
Qmake lets you generate Makefiles based on the information in a
'.pro' project file. A simple project file looks something like this:
Here, 'SOURCES' defines all the implementation sources for the
application; if you have more than one source file, you can define
them like this: SOURCES = hello.cpp newone.cpp
Similarly, 'HEADERS' lets you specify the header files belonging to
your source. The 'CONFIG' section gives qmake information about the
application configuration. This project file's name should be the
same as the application's executable, which in our case is
'hello.pro'. The Makefile can then be generated by issuing the command:
Qt Designer is a tool that lets you visually design and code user
interfaces using the 'Qt' library. The WYSIWYG interface comes in
very handy for minutely tweaking the user interface and experimenting
with various widgets. The Designer can generate the entire source for
the GUI at any time, for you to enhance further. You will read more
about 'Qt Designer' in the articles that follow.
Let's begin by understanding a basic 'Hello World' program. Use any
source editor of your choice to write the following code, and save it
as a plain text file ('hello.cpp'). Now let's compile this code by
making a project file (.pro). Save the project file as 'hello.pro' in
the same directory as our source file, and continue with the
generation of the Makefile.
Compile the result using 'make'.
Xlib is a library that allows you to draw graphics on the screen
of any X server, local or remote, using
the C language. All you need to do is include <X11/Xlib.h>, link
your program using the -lX11 switch, and you are ready to
use any of the functions in the library.
For example, say you want to create and show a window on
your local machine. You can write the following:
You can compile the program with the following command:
and voilà, you have a window on your screen for 10 seconds:
The purpose of this article is to show you some simple classes
that you can use when developing Xlib applications. We will create
an example application that has a window with one button on it.
The button will be a custom button we develop using only
the Xlib library.
You might be asking yourself "why don't we just use a widget library,
like QT or
GTK?"
That is a valid question. I use QT, and find it very
useful when developing C++ applications targeted at the Linux platform.
The reason I created these classes was to get a better understanding
of the X Window System. It forced me to figure out exactly what was going
on under the hood in libraries like QT and GTK. Once I had finished,
I realized that the classes I created were actually useful.
So hopefully you will find this article educational, and be able to
use the classes presented in your own applications.
Now let's dive into some code. We'll go over some basic features
of Xlib in this section.
The first class I created was the
display class,
which was in charge of opening and closing a display. You'll
notice that in example1.cpp, we don't close our display properly
with XCloseDisplay(). With this class, it will be closed before
the program exits. Our example now looks like this:
Nothing spectacular, really. Just opens and closes a display.
You'll notice in the
implementation
that the display class defines the Display* operator, so all
you have to do is cast the object to get the actual
Xlib Display pointer.
Also notice the try/catch block. All of the classes in this article
throw custom exceptions to signal error conditions.
Next I wanted to make window creation easier, so I added a
window class to the
mix. This class creates and shows a window in its constructor,
and destroys the window in its destructor. Our example now
looks like this (pay no attention to the event_dispatcher class;
we will go over that next):
Notice that our main_window class inherits from
xlib::window. When we create the main_window object, the
base class' constructor gets called, which creates the
actual Xlib window.
You probably noticed the
event_dispatcher
class in the last example. This class takes events off of the
application's queue, and dispatches them to the correct
window.
This class is defined as the following:
The event_dispatcher passes events to window classes via the
window_base interface.
All of the classes in this article that represent windows derive
from this class, and are able to catch messages from the dispatcher.
Once they register themselves with the register_window
method, they start receiving messages.
window_base is
declared as the following, and all classes deriving from it
must define these methods:
Let's see if this actually works. We will try to handle a ButtonPress
event in our window. Add the following code to our main_window
class:
Compile the code, run the example, and click inside of the
window. It works! The event_dispatcher gets a ButtonPress
message, and sends it to our window via the predefined
on_left_button_down method.
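Although the article's classes are C++, the routing logic of the dispatcher is easy to model in a few lines of Python. Everything below (EventDispatcher, Window, dispatch) is an illustrative sketch, not the actual xlib++ code:

```python
# Sketch of the event_dispatcher idea: windows register with the
# dispatcher, which routes each event to the window whose id matches.

class Window:
    def __init__(self, dispatcher, win_id):
        self.id = win_id
        self.events = []
        dispatcher.register_window(self)  # start receiving messages

    def handle_event(self, event):
        self.events.append(event)

class EventDispatcher:
    def __init__(self):
        self.windows = {}

    def register_window(self, w):
        self.windows[w.id] = w

    def unregister_window(self, w):
        del self.windows[w.id]

    def dispatch(self, win_id, event):
        # In real Xlib code the (win_id, event) pair would come from
        # XNextEvent(); here we hand it in directly.
        if win_id in self.windows:
            self.windows[win_id].handle_event(event)

dispatcher = EventDispatcher()
main_win = Window(dispatcher, 1)
button_win = Window(dispatcher, 2)

dispatcher.dispatch(2, "ButtonPress")
print(button_win.events)  # prints ['ButtonPress']
print(main_win.events)    # prints []
```

The key point is the lookup by window id: one queue of events fans out to many windows, each seeing only its own messages.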
Next let's try to draw in our window. The X Window system defines
the concept of a "graphics context" that you draw into, so I
naturally created a class named graphics_context. The following
is the class' definition:
You pass this class a window id, and a display object, and
then you can draw as much as you want using the drawing
methods. Let's try it out. Add the following to our example:
The on_expose() method is called whenever the window
is displayed, or "exposed". In this method we draw a line and
some text in the window's client area. When you compile and run this
example, you should see something similar to the following:
The graphics_context class is used extensively in the
rest of this article.
You may also notice a few helper classes in the above code,
point and line. These
are small classes I created, all having to do with shapes. They
don't look like they are necessary now, but they will be helpful
later on if I have to perform complex operations with them, like
transformations. For example, it is easier to say "line.move_x(5)",
than to say "line_x += 5; line_y += 5;". It is much cleaner, and
less error-prone.
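As a quick illustration of why the helpers pay off, here is the same idea sketched in Python (Point and Line here are stand-ins for the article's C++ classes):

```python
# Sketch of the point/line helper idea: moving a line adjusts both
# endpoints in one call, instead of touching coordinates by hand.

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

class Line:
    def __init__(self, p1, p2):
        self.p1 = p1
        self.p2 = p2

    def move_x(self, dx):
        # One call instead of two separate coordinate updates.
        self.p1.x += dx
        self.p2.x += dx

l = Line(Point(0, 0), Point(50, 50))
l.move_x(5)
print(l.p1.x, l.p2.x)  # prints 5 55
```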
Enough of the simple stuff - now let's move on to creating actual
widgets that can be reused. Our focus now will be on
creating a command button that we can use in an application.
The requirements of this button are as follows:
This seems like a simple control, but implementing all of it is not
as trivial as it looks. The following sections describe the implementation.
First off, we have to create a separate window for this command
button. The constructor calls the show method, which in turn calls
the create method, which is responsible for window creation:
Looks a lot like the window class' constructor, doesn't it?
First it creates the window with the Xlib API XCreateSimpleWindow(),
then it registers itself with the event_dispatcher so it will receive
events, and finally it sets its background.
Notice that we pass the parent window's id into the call to
XCreateSimpleWindow(). We are telling Xlib that we want our
command button to be a child window of the parent.
Because the command button registered itself with the event_dispatcher,
it will receive on_expose() events when it needs to draw
itself. We will use the graphics_context class to draw both
states.
The following is the code that will be used for the "not
pressed" state:
When we finally compile and run this code later on, the button
will look like this:
Alternatively, when the button is pressed, the following
code will be used to draw it:
And the finished product will appear like the following:
This seems like a pretty simple task - draw the "pressed" state
when the mouse is down over the control, and draw the "not pressed"
state when the mouse is up. This isn't entirely correct, though.
When you press and hold the left mouse button over our control,
and move the mouse out of the rect, the command button should draw
the "not pressed" state, even though the left mouse button is
currently pressed.
The command_button
class uses two member variables to handle this - m_is_down,
and m_is_mouse_over. Initially, when the mouse is pressed down
over our control (see on_left_button_down()), we put ourselves into
the down state, and refresh the control. This results in the command button
drawing itself pressed. If, at any time, the mouse moves out of the rect
of our control (see on_mouse_exit()), m_is_mouse_over
is set to false, and the control
is refreshed. This results in the command button drawing itself in
the "not pressed" state. If the mouse then moves into the rect of the
control, m_is_mouse_over is toggled back to true, and the control
is drawn pressed. Once the mouse button is released, we set ourselves to
the "not pressed" state, and refresh ourselves.
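The two-flag logic can be modeled on its own; this Python sketch (illustrative names, not the article's C++) walks through exactly the state transitions described above:

```python
# Sketch of the command button's state logic: the button draws the
# "pressed" state only while the mouse button is down AND the pointer
# is over the control's rect.

class CommandButton:
    def __init__(self):
        self.is_down = False        # like m_is_down
        self.is_mouse_over = False  # like m_is_mouse_over

    def drawn_pressed(self):
        return self.is_down and self.is_mouse_over

    def on_left_button_down(self):
        self.is_down = True
        self.is_mouse_over = True

    def on_mouse_exit(self):
        self.is_mouse_over = False

    def on_mouse_enter(self):
        self.is_mouse_over = True

    def on_left_button_up(self):
        self.is_down = False

b = CommandButton()
b.on_left_button_down()
print(b.drawn_pressed())  # prints True  (pressed over the control)
b.on_mouse_exit()
print(b.drawn_pressed())  # prints False (dragged out of the rect)
b.on_mouse_enter()
print(b.drawn_pressed())  # prints True  (dragged back in)
b.on_left_button_up()
print(b.drawn_pressed())  # prints False (released)
```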
This is a pretty simple task. We basically want the user of this
command button to be able to get and set the text displayed.
Here is the code:
The refresh() call is there so that the control redraws itself
with the new text.
We want the user of this command button to know when we
were clicked. To do this, we will generate an "on_click()"
event. The following is the definition of the
command_button_base class:
What we are basically saying here is that "we support all events
that a window does, plus one more - on_click()". The user
of this button can derive a new class from it, implement
the on_click() method, and take the appropriate action.
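The subclass-and-override pattern looks like this in a Python sketch (OkButton and its click counter are invented for illustration; the real code is C++):

```python
# Sketch of deriving from the button base class and overriding
# on_click() to take an application-specific action.

class CommandButtonBase:
    def on_click(self):
        # Pure virtual in the C++ version: subclasses must override.
        raise NotImplementedError

class OkButton(CommandButtonBase):
    def __init__(self):
        self.clicks = 0

    def on_click(self):
        self.clicks += 1

ok = OkButton()
ok.on_click()      # the dispatcher would call this on a user click
print(ok.clicks)   # prints 1
```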
I really hope you enjoyed this article. We went over many
features of Xlib, and wrapped them in C++ classes to
make Xlib development easier in the future. If you have any
questions, comments, or suggestions about this article,
or about Xlib development in general, please feel free
to email me.
Franck Alcidi
Franck is an artist in Australia. His home page ("Artsolute Linux") is http://www.artsolute.net.
Copyright © 2002, Franck Alcidi.
Copying license http://www.linuxgazette.net/copying.html
Published in Issue 78 of Linux Gazette, May 2002
"Linux Gazette...making Linux just a little more fun!"
Help Dex
By Shane Collinge
[These cartoons are scaled down to fit into LG.
To see a panel in all its clarity, click on it. -Editor (Iron).]
Shane Collinge
Part computer programmer, part cartoonist, part Mars Bar. At night, he runs
around in a pair of colorful tights fighting criminals. During the day... well,
he just runs around. He eats when he's hungry and sleeps when he's sleepy.
Copyright © 2002, Shane Collinge.
Copying license http://www.linuxgazette.net/copying.html
Published in Issue 78 of Linux Gazette, May 2002
A Trip Down Hypermedia Lane
By Ronnie Holm
1940s: Vannevar Bush and the Memex
1960s: Douglas Engelbart and the NLS
1960s: Ted Nelson and the Xanadu
Hypermedia architectures
Monolithic systems
Client/server systems
Open Hypermedia Systems
Component Based OHS's
Summary
Bibliography
pub/vbush/vbush.shtml
Publications/Construct/sac99.pdf
poetry/xanadu.html
Publications/Hyperform/echt92.pdf
Copyright © 2002, Ronnie Holm.
Copying license http://www.linuxgazette.net/copying.html
Published in Issue 78 of Linux Gazette, May 2002
Rapid application development using PyGTK
By Krishnakumar R.
1. What is PyGTK ?
"This archive contains modules that allow you to use gtk in Python
programs. At present, it is a fairly complete set of bindings.
Despite the low version number, this piece of software is quite
useful, and is usable to write moderately complex programs."
- README, pygtk-0.6.4
2. What are we going to do ?
3. Packages and Basic knowledge you should have
4. Let us start
5. Stage 1 - Building a Window
Open a new file, stage1.py, using an editor, and write the following lines into it:
from gtk import *
win = GtkWindow()
def main():
win.set_usize(300, 350)
win.connect("destroy", mainquit)
win.set_title("Scientific Calculator")
win.show()
mainloop()
main()
python stage1.py
6. Stage 2 - Building the table and buttons
Let us start writing the second file, stage2.py. Write the following code into it:
from gtk import *
rows=9
cols=4
win = GtkWindow()
box = GtkVBox()
table = GtkTable(rows, cols, FALSE)
text = GtkText()
close = GtkButton("close")
button_strings=['hypot(','e',',','clear','log(','log10(','pow(','pi','sinh(','cosh(','tanh(','sqrt(','asin(',
'acos(','atan(','(','sin(','cos(','tan(',')','7','8','9','/','4','5','6','*','1','2','3','-', '0','.','=','+'
]
button = map(lambda i:GtkButton(button_strings[i]), range(rows*cols))
def main():
win.set_usize(300, 350)
win.connect("destroy", mainquit)
win.set_title("Scientific Calculator")
win.add(box)
box.show()
text.set_editable(FALSE)
text.set_usize(300,1)
text.show()
text.insert_defaults(" ")
box.pack_start(text)
table.set_row_spacings(5)
table.set_col_spacings(5)
table.set_border_width(0)
box.pack_start(table)
table.show()
for i in range(rows*cols) :
y,x = divmod(i, cols)
table.attach(button[i], x,x+1, y,y+1)
button[i].show()
close.show()
box.pack_start(close)
win.show()
mainloop()
main()
rows and cols are used to store the number of rows and columns of buttons, respectively. Four new objects -- the table, the box, the text box and the close button -- are created. The argument to GtkButton is the label of the button, so close is a button with the label "close".
button_strings is used to store the labels of the buttons; the symbols that appear on the keypad of a scientific calculator are used here. The variable button is an array of buttons. The map function creates rows*cols buttons. The label of each button is taken from the array button_strings, so the i-th button will have the i-th string from button_strings as its label. The range of i is from 0 to rows*cols-1.
show of the window, table and buttons is called after they are logically created. With win.add we add the box to the window.
text.set_editable(FALSE) sets the text box as non-editable; that means we cannot externally add anything to the text box by typing. text.set_usize sets the size of the text box, and text.insert_defaults inserts the null string as the default string in the text box. This text box is packed into the start of the box.
y,x = divmod(i, cols) divides the value of i by cols, keeping the quotient in y and the remainder in x.
pack_start inserts the object into the next free space available within the box.
python stage2.py
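The layout arithmetic is worth trying on its own. This standalone Python snippet (no GTK required) shows how divmod maps a flat button index to a table cell; the labels are just a subset of button_strings:

```python
# Standalone sketch of the layout arithmetic: divmod(i, cols) maps a
# flat button index to a (row, column) cell in the table.

cols = 4
labels = ['7', '8', '9', '/',
          '4', '5', '6', '*']

grid = {}
for i, label in enumerate(labels):
    y, x = divmod(i, cols)   # y = row (quotient), x = column (remainder)
    grid[(y, x)] = label

print(grid[(0, 0)])  # prints 7   (index 0: row 0, col 0)
print(grid[(0, 3)])  # prints /   (index 3: row 0, col 3)
print(grid[(1, 2)])  # prints 6   (index 6: row 1, col 2)
```

In the tutorial's code the same (y, x) pair is fed to table.attach to place each button.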
7. Stage 3 - Building the backend for the calculator
Now write the final file, scical.py. This is the final stage: scical.py contains the finished program, given below:
from gtk import *
from math import *
toeval=' '
rows=9
cols=4
win = GtkWindow()
box = GtkVBox()
table = GtkTable(rows, cols, FALSE)
text = GtkText()
close = GtkButton("close")
button_strings=['hypot(','e',',','clear','log(','log10(','pow(','pi','sinh(','cosh(','tanh(','sqrt(','asin(','acos(','atan(','(','sin(','cos(','tan(',')','7','8','9','/','4','5','6','*','1','2','3','-', '0','.','=','+']
button = map(lambda i:GtkButton(button_strings[i]), range(rows*cols))
def myeval(*args):
global toeval
try :
b=str(eval(toeval))
except:
b= "error"
toeval=''
else : toeval=b
text.backward_delete(text.get_point())
text.insert_defaults(b)
def mydel(*args):
global toeval
text.backward_delete(text.get_point())
toeval=''
def calcclose(*args):
global toeval
myeval()
win.destroy()
def print_string(args,i):
global toeval
text.backward_delete(text.get_point())
text.backward_delete(len(toeval))
toeval=toeval+button_strings[i]
text.insert_defaults(toeval)
def main():
win.set_usize(300, 350)
win.connect("destroy", mainquit)
win.set_title("Scientific Calculator: scical (C) 2002 Krishnakumar.R, Share Under GPL.")
win.add(box)
box.show()
text.set_editable(FALSE)
text.set_usize(300,1)
text.show()
text.insert_defaults(" ")
box.pack_start(text)
table.set_row_spacings(5)
table.set_col_spacings(5)
table.set_border_width(0)
box.pack_start(table)
table.show()
for i in range(rows*cols) :
if i==(rows*cols-2) : button[i].connect("clicked",myeval)
elif (i==(cols-1)) : button[i].connect("clicked",mydel)
else : button[i].connect("clicked",print_string,i)
y,x = divmod(i, 4)
table.attach(button[i], x,x+1, y,y+1)
button[i].show()
close.show()
close.connect("clicked",calcclose)
box.pack_start(close)
win.show()
mainloop()
main()
A new variable toeval has been included. This variable stores the string that is to be evaluated, which is displayed in the text box at the top. The string is evaluated when the = button is pressed; this is done by calling the function myeval. The string contents are evaluated using the Python function eval, and the result is printed in the text box. If the string cannot be evaluated (due to a syntax error), the string 'error' is printed instead; we use try and except for this.
Pressing any button (with the mouse) other than clear, close and = triggers the function print_string. This function first clears the text box, appends the string corresponding to the pressed button to the variable toeval, and then displays toeval in the text box.
If we press the close button, the function calcclose is called, which destroys the window. If we press the clear button, the function mydel is called and the text box is cleared. In the function main, we have added 3 new statements to the for loop; they assign the corresponding function to each button. Thus the = button is attached to the myeval function, clear is attached to mydel, and so on.
Thus we have the complete scientific calculator ready. Just type
python scical.py
at the shell prompt and you have the scientific calculator running.
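The heart of the backend, stripped of all GTK code, fits in a few lines. This standalone sketch mirrors myeval's use of eval inside try/except (the evaluate function here is a simplification, not the program's exact code):

```python
# Standalone sketch of the calculator backend: the expression string is
# built up button by button, then evaluated with eval() inside
# try/except so a bad expression yields "error" instead of a crash.

from math import *  # makes sin, sqrt, pi etc. available to eval

def evaluate(toeval):
    try:
        return str(eval(toeval))
    except:
        return "error"

print(evaluate("1+2"))        # prints 3
print(evaluate("sqrt(4.0)"))  # prints 2.0
print(evaluate("1+"))         # prints error
```

Note that eval will happily run any Python expression, which is fine for a toy calculator but something to keep in mind before reusing the idea elsewhere.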
8. Conclusion
Krishnakumar R.
Krishnakumar is a final-year B.Tech student at Govt. Engg. College Thrissur,
Kerala, India. His journey into the land of operating systems started with
module programming in Linux. He has built a routing operating system named
GROS. (Details are available at his home page:
www.askus.way.to ) His other
interests include networking, device drivers, compiler porting and embedded systems.
Copyright © 2002, Krishnakumar R.
Copying license http://www.linuxgazette.net/copying.html
Published in Issue 78 of Linux Gazette, May 2002
Qubism
By Jon "Sir Flakey" Harsem
[These cartoons are scaled down to fit into LG.
To see a panel in all its clarity, click on it. -Editor (Iron).]
Jon "SirFlakey" Harsem
Jon is the creator of the Qubism cartoon strip and current
Editor-in-Chief of the
CORE News Site.
Somewhere along the early stages of
his life he picked up a pencil and started drawing on the wallpaper. Now
his cartoons appear 5 days a week on-line, go figure. He confesses to
owning a Mac but swears it is for "personal use".
Copyright © 2002, Jon "Sir Flakey" Harsem.
Copying license http://www.linuxgazette.net/copying.html
Published in Issue 78 of Linux Gazette, May 2002
GUI Programming in C++ using the Qt Library, part 1
By Gaurav Taneja
Getting Started
PATH=$QTDIR/bin:$PATH
MANPATH=$QTDIR/man:$MANPATH
LD_LIBRARY_PATH=$QTDIR/lib:$LD_LIBRARY_PATH
Your First Steps With 'Qt'
Qmake
SOURCES = hello.cpp
HEADERS = hello.h
CONFIG += qt warn_on release
TARGET = hello
or alternatively by:
SOURCES += hello.cpp
SOURCES += newone.cpp
[root@Linux mydirectory]# qmake -o Makefile hello.pro
Qt Designer
Hello World!
#include <qapplication.h>
#include <qpushbutton.h>
int main( int argc, char **argv )
{
QApplication a(
argc, argv );
QPushButton hello( "Hello world!", 0
);
hello.resize( 100, 30 );
a.setMainWidget( &hello
);
hello.show();
return a.exec();
}
TEMPLATE = app
CONFIG += qt warn_on release
HEADERS =
SOURCES = hello.cpp
TARGET = hello
[root@Linux mydirectory]# qmake -o Makefile hello.pro
[root@Linux mydirectory]# make
You are now ready to test your first 'Qt' Wonder. Provided you are in 'X', you can launch the
program executable.
[root@Linux mydirectory]# ./hello
You should see something like this:
Let's understand the individual chunks of the code we've written.
The first two lines in our code include the QApplication and QPushButton class definitions.
Always remember that there must be exactly one QApplication object in your entire application.
As with other C++ programs, the main() function is the entry point of the program;
argc is the number of command-line arguments, while argv is the array of command-line arguments.
Next you pass these arguments on to Qt as follows:
QApplication a(argc, argv)
Next we create a QPushButton object and initialize its constructor with two arguments: the
label of the button and its parent window (0, i.e., in its own window in this case).
We resize our button with the following code:
hello.resize(100,30);
Qt applications can optionally have a main widget associated with them. On closure of the main
widget, the application terminates.
We set our main widget as:
a.setMainWidget( &hello );
Next, we make our main widget visible. You always have to call show() in order to make
a widget visible.
hello.show();
Finally, we pass control to Qt. An important point to note here is that exec()
keeps running for as long as the application is alive, and returns when the application exits.
Gaurav Taneja
I work as a Technical Consultant in New Delhi, India in Linux/Java/XML/C++.
I'm actively involved in open-source projects, with some hosted on
SourceForge. My favorite leisure activities include long drives, tennis,
watching movies and partying. I also run my own software consulting company
named BroadStrike Technologies.
Copyright © 2002, Gaurav Taneja.
Copying license http://www.linuxgazette.net/copying.html
Published in Issue 78 of Linux Gazette, May 2002
Xlib Programming in C++
By Rob Tougher
1. Introduction
#include <X11/Xlib.h>
#include <unistd.h>
int main()
{
// Open a display.
Display *d = XOpenDisplay(0);
if ( d )
{
// Create the window
Window w = XCreateWindow(d, DefaultRootWindow(d), 0, 0, 200,
100, 0, CopyFromParent, CopyFromParent,
CopyFromParent, 0, 0);
// Show the window
XMapWindow(d, w);
XFlush(d);
// Sleep long enough to see the window.
sleep(10);
}
return 0;
}
prompt$ g++ test.cpp -L/usr/X11R6/lib -lX11
prompt$ ./a.out
2. Why not use a widget set?
3. The basics
3.1 Opening a display
#include <unistd.h>
#include "xlib++/display.hpp"
using namespace xlib;
int main()
{
try
{
// Open a display.
display d("");
// Create the window
Window w = XCreateWindow((Display*)d,
DefaultRootWindow((Display*)d),
0, 0, 200, 100, 0, CopyFromParent,
CopyFromParent, CopyFromParent, 0, 0);
// Show the window
XMapWindow(d, w);
XFlush(d);
// Sleep long enough to see the window.
sleep(10);
}
catch ( open_display_exception& e )
{
std::cout << "Exception: " << e.what() << "\n";
}
return 0;
}
3.2 Creating a window
#include "xlib++/display.hpp"
#include "xlib++/window.hpp"
using namespace xlib;
class main_window : public window
{
public:
main_window ( event_dispatcher& e ) : window ( e ) {};
~main_window(){};
};
int main()
{
try
{
// Open a display.
display d("");
event_dispatcher events ( d );
main_window w ( events ); // top-level
events.run();
}
catch ( exception_with_text& e )
{
std::cout << "Exception: " << e.what() << "\n";
}
return 0;
}
3.3 Handling events
class event_dispatcher
{
// constructor, destructor, and others...
[snip...]
register_window ( window_base *p );
unregister_window ( window_base *p );
run();
stop();
handle_event ( event );
}
virtual void on_expose() = 0;
virtual void on_show() = 0;
virtual void on_hide() = 0;
virtual void on_left_button_down ( int x, int y ) = 0;
virtual void on_right_button_down ( int x, int y ) = 0;
virtual void on_left_button_up ( int x, int y ) = 0;
virtual void on_right_button_up ( int x, int y ) = 0;
virtual void on_mouse_enter ( int x, int y ) = 0;
virtual void on_mouse_exit ( int x, int y ) = 0;
virtual void on_mouse_move ( int x, int y ) = 0;
virtual void on_got_focus() = 0;
virtual void on_lost_focus() = 0;
virtual void on_key_press ( character c ) = 0;
virtual void on_key_release ( character c ) = 0;
virtual void on_create() = 0;
virtual void on_destroy() = 0;
class main_window : public window
{
public:
main_window ( event_dispatcher& e ) : window ( e ) {};
~main_window(){};
void on_left_button_down ( int x, int y )
{
std::cout << "on_left_button_down()\n";
}
};
3.4 Drawing
class graphics_context
{
public:
graphics_context ( display& d, int window_id );
~graphics_context();
void draw_line ( line l );
void draw_rectangle ( rectangle rect );
void draw_text ( point origin, std::string text );
void fill_rectangle ( rectangle rect );
void set_foreground ( color& c );
void set_background ( color& c );
rectangle get_text_rect ( std::string text );
std::vector
#include "xlib++/display.hpp"
#include "xlib++/window.hpp"
#include "xlib++/graphics_context.hpp"
using namespace xlib;
class main_window : public window
{
public:
main_window ( event_dispatcher& e ) : window ( e ) {};
~main_window(){};
void on_expose ()
{
graphics_context gc ( get_display(),
id() );
gc.draw_line ( line ( point(0,0), point(50,50) ) );
gc.draw_text ( point(0, 70), "I'm drawing!!" );
}
};
4. Advanced - creating a command button from scratch
4.1 Requirements of the button
4.2 Giving it its own window
virtual void create()
{
if ( m_window ) return;
m_window = XCreateSimpleWindow ( m_display, m_parent.id(),
m_rect.origin().x(),
m_rect.origin().y(),
m_rect.width(),
m_rect.height(),
0, WhitePixel((void*)m_display,0),
WhitePixel((void*)m_display,0));
if ( m_window == 0 )
{
throw create_button_exception
( "could not create the command button" );
}
m_parent.get_event_dispatcher().register_window ( this );
set_background ( m_background );
}
4.3 Implementing "pressed" and "not pressed" drawn states
// bottom
gc.draw_line ( line ( point(0,
rect.height()-1),
point(rect.width()-1,
rect.height()-1) ) );
// right
gc.draw_line ( line ( point ( rect.width()-1,
0 ),
point ( rect.width()-1,
rect.height()-1 ) ) );
gc.set_foreground ( white );
// top
gc.draw_line ( line ( point ( 0,0 ),
point ( rect.width()-2, 0 ) ) );
// left
gc.draw_line ( line ( point ( 0,0 ),
point ( 0, rect.height()-2 ) ) );
gc.set_foreground ( gray );
// bottom
gc.draw_line ( line ( point ( 1, rect.height()-2 ),
point(rect.width()-2,rect.height()-2) ) );
// right
gc.draw_line ( line ( point ( rect.width()-2, 1 ),
point(rect.width()-2,rect.height()-2) ) );
gc.set_foreground ( white );
// bottom
gc.draw_line ( line ( point(1,rect.height()-1),
point(rect.width()-1,rect.height()-1) ) );
// right
gc.draw_line ( line ( point ( rect.width()-1, 1 ),
point ( rect.width()-1, rect.height()-1 ) ) );
gc.set_foreground ( black );
// top
gc.draw_line ( line ( point ( 0,0 ),
point ( rect.width()-1, 0 ) ) );
// left
gc.draw_line ( line ( point ( 0,0 ),
point ( 0, rect.height()-1 ) ) );
gc.set_foreground ( gray );
// top
gc.draw_line ( line ( point ( 1, 1 ),
point(rect.width()-2,1) ) );
// left
gc.draw_line ( line ( point ( 1, 1 ),
point( 1, rect.height()-2 ) ) );
4.4 Figuring out which state to draw
4.5 Giving it a "text" property
std::string get_name() { return m_name; }
void set_name ( std::string s ) { m_name = s; refresh(); }
4.6 Generating an "on_click()" event
namespace xlib
{
class command_button_base : public window_base
{
public:
virtual void on_click () = 0;
};
};
5. Conclusion
a. References
b. Files