Archive Page 2


Is OpenSolaris in hot water?

Here’s how it works: Novell owns Unix’s IP (intellectual property). SCO sold Unix’s IP to Sun. Sun then included some Unix IP into Solaris. Finally, Sun open sourced Solaris as OpenSolaris. Sounds like trouble, doesn’t it?

While Sun’s Chief Open Source Officer Simon Phipps described the line of logic above as “sheer speculation,” others see a major potential legal problem for Sun. However, analysts, lawyers and open source leaders also agreed that it’s unlikely Novell would ever choose to make trouble for Sun. Novell itself has not commented on its intentions, despite several attempts to get the Linux company’s take on the issue.

Thomas Carey, chairman of the business practice group at the Boston-based Bromberg & Sunstein IP law firm, describes the legal details like this: “As to Sun, SCO released Sun from a confidentiality obligation with respect to SVRX (System V Release X Unix) code when its contract with Novell did not permit it to do so without Novell’s permission. SCO did not seek or obtain that permission. This proceeding does not involve Sun as a party, only SCO and Novell. As between these parties, the court views the genie (the confidential information) to be out of the bottle, and the court can’t put it back in. It can, however, hold SCO liable to Novell for breach of contract (and/or breach of fiduciary duty), and it did so and found the damages for this breach to be $2.5-million.”

What does this mean for Sun? Carey says, “In theory, Novell could sue Sun directly, but its chances of success would be slim. Furthermore, Novell is not interested in pursuing/developing SVRX, and is more interested in its reputation in the open source community. Its lawsuit against SCO was political — it got to wear the white hat. If it went after Sun because of OpenSolaris, it would wear the black hat. It is not likely to change hats now.”

Jay Lyman, an open source analyst for The 451 Group, also can’t see Novell siccing its lawyers on Sun. “Novell is unlikely to overtly pursue any kind of legal strategy against Sun. It may try to use the rulings as leverage behind the scenes, but I doubt the benefits to Novell of legal strategies or threats involving Unix, Linux, open source, and Sun. Novell has arguably more to gain by focusing on growing its Linux and open source business (including work with Sun) than to do anything that remotely resembles what SCO did.”

Others agree with Lyman. “I don’t believe that OpenSolaris is in much danger,” says open source advocate Bruce Perens. “Novell would only be in another long lawsuit if it tried to pressure Sun, or tried to sell those rights to someone who would pressure Sun. Instead, I think we’ll see Sun make some quiet deal with Novell.”

Besides, Perens continues, “If pressured, Sun could buy out Novell without a problem, which would be the best end for Novell anyway.”

So, most people agree: Novell could give Sun a real legal headache, but since that wouldn’t serve the Linux company’s goals, no one can see Novell trying it. For the time being, at least, it appears that Sun will be getting a free legal pass for OpenSolaris.

by Steven J. Vaughan-Nichols


Games for Linux Sites

Games for Linux

The portal Games for Linux was founded in 2001 by Juraj Michálek from Slovakia. At the time, only a small number of games for Linux were scattered across the Internet, and anyone who wanted to play them had to download all the required libraries and recompile each game by hand. That was hard to do, because many web links were broken and source files were sometimes inaccessible, so just running a game took a lot of effort. The basic idea of the Games for Linux portal was to collect the games’ tarballs, together with all the libraries they need, in one place. The second idea was to teach porting skills to Linux newbies and help them port their games from DOS, Windows, or other operating systems to Linux (and back to Windows as well).

A lot of work has been done. 🙂 Now you can enjoy Linux games. In the future we plan to build RPM and other binary packages for each game; for now, games are available only as source tarballs.

Linux Games


Linux Gamers is one of the biggest Linux gaming communities, with an integrated multi-gaming clan. On this page you will find the latest Linux gaming news, a big moderated forum, a huge HOWTO collection, and other information about the community and the clan.

Tux Games

Are you keeping an unwanted operating system on your computer just for playing games? Well, now there is no need. Tux Games will free your computer from its shackles and allow you to play games under your favorite operating system.

Tux Games is dedicated to providing Linux users with the best Linux games at the lowest possible prices.

If you are new to gaming with Linux, we can provide you with all you need to turn your PC into a stable and reliable gaming platform.

The Linux Game Tome

The Linux Game Tome is a whole bunch of really badly written Perl scripts that talk to a MySQL database. We use phpBB for the forums. The server is running Red Hat Linux, although we’d prefer to be running Debian (but we like Red Hat just fine, honest!). The server hardware and bandwidth are donated by Penguin Computing, who are too cool for words because of their willingness to help this site continue. Please buy your next Beowulf cluster from them.

and you will find many more things on these sites.


Mozilla fixes nine flaws in Thunderbird

Updates e-mail program to patch bugs handled in Firefox weeks ago

Mozilla Messaging patched nine security vulnerabilities in Thunderbird yesterday, the first time it has plugged holes in the e-mail software since early May.

Thunderbird, which was added to Mozilla’s download servers late Wednesday, quashes nine bugs, including one that was patched last week in Firefox, the company’s open-source browser. The remainder fix flaws that were first addressed in early July when Mozilla updated Firefox to Version

It’s not unusual for Thunderbird security updates to lag behind those released for Firefox.

Seven of the nine bugs were rated “moderate” by Mozilla, the second-lowest of the four rankings in its threat system. The other two were pegged as “low.”

The bug patched in Thunderbird yesterday that was fixed in Firefox last week was in the browser rendering engine’s CSSValue array data structure. According to Mozilla, the vulnerability could be used by hackers to force a crash, and from there, run malicious code. Several other just-patched Thunderbird vulnerabilities could also be used by attackers to execute code remotely.

Thunderbird 2.x, like its browser sibling, is on the way out. Most of Mozilla’s attention is now on Thunderbird 3.0, which has been available as an Alpha 1 preview for more than two months.

Users can download Thunderbird in versions for Windows, Mac OS X and Linux from the Mozilla site, call up the e-mail client’s built-in updater or wait for the automatic update notification, which typically appears within 24 to 48 hours.

by Gregg Keizer


What Microsoft can do for Open Source

This morning Sam Ramji gave one of the closing keynote presentations at OSCON 2008. He talked about writing a new chapter in Microsoft’s history with the open source community, and he promised to talk openly and honestly with us. It is a promise that he made to me personally when I met him between sessions a few days earlier. He also made a commitment to engage in difficult conversations about tough issues. And he announced some other concrete ways that Microsoft was reaching out to the open source community. But the subtext of all these commitments seemed to me to be a deeper question that Sam is trying to answer: what can Microsoft do to make peace and partner with the open source community?

In the past I’ve advocated for fair treatment of Microsoft with respect to the license approval process. And after approving two of Microsoft’s licenses I’ve also written posts that have been critical of what I consider to be unfair or hypocritical behavior of Microsoft (toward open source) [1] [2] [3] and [4]. When I was notified of the Sandcastle debacle I concluded that Microsoft had stooped to a new level to discredit open source, namely, to prove through its own bumbling actions that Open Source is so terribly difficult to get right that mainstream corporate America (which is clearly not as smart as Microsoft) would do best to stay clear away. But Sam tells me that I have it all wrong. And he tells me that he’s committed to proving that Microsoft can act in good faith. And I believe that Sam does believe that, which is a start.

After Sam’s keynote there were quite a number of people who wanted to probe Sam’s statements, as to their depth, breadth, felicity, and, most of all, their agreement with observable Microsoft policies and behaviors. It was not a friendly crowd. Nevertheless, there was great insight contained in many of the questions, the best of which came from Jim Blandy. Jim politely thanked Sam for co-sponsoring OSCON and newly sponsoring the Apache Software Foundation, and then he asked (something along the lines of)

It is not possible to infringe a patent by merely implementing it, so Microsoft’s promise to not assert patent claims against developers [for software within its universe of standards and interoperability] is moot. When is Microsoft going to step up and promise to not assert patents against those who distribute and practice the patent, namely the commercial open source companies and in particular, users of their software?

Sam responded that Microsoft has signed a number of agreements with various software companies to protect them and their users, an answer which inflamed rather than satisfied the audience, myself included. So what can Microsoft do for Open Source, really?

Well, let’s think big. The Open Source community already has more than a billion lines of source code at its disposal, and it’s doubling every 12.5 months, so I think it’s fair to say “we don’t really need your code”. And we also know that money alone is no substitute for the freedom to innovate that we so crave. So what big thing could we do with Microsoft’s cooperation?

There are really four things on my list, but if they did only the first, it would be a meaningful start. The list is:

  1. Pursue the abolition of software patents with the same zeal they showed in their efforts to get OOXML approved as a standard.
  2. Unilaterally promise to not use the DMCA to maintain control of their Trusted Computing Platform.
  3. Transition to 100% open standards (as defined by the OSI, IETF, W3C, or the Digistan).
  4. Stop trying to maintain their monopolies by illegal, anti-competitive means [1] [2].

We are now at a moment in history where science, logic, and economics all show that software patents are bad for the industry and bad for society. Even the US Congress is gearing up to confront a massively broken system. But Microsoft, who themselves are subject to hundreds of millions of dollars a year in frivolous patent claims against them, is not well-positioned to tackle the problem single-handedly: could you imagine Steve Ballmer testifying on Capitol Hill about why we should abolish software patents and then facing all sorts of questions about why we should preclude little monopolies from having their chance to become larger monopolies? Therefore, what Microsoft cannot do without the open source community, and what we’re having a hard time doing without Microsoft, is an historic opportunity for us to work together in a meaningful way. Namely, to present a consistent front that places companies and communities together in opposition to a legal accident that should never have happened in the first place. Together we can show the patent trolls for what they are and use our privilege of living in a democracy to restore to the public what also benefits the public: the freedom to innovate.

Microsoft could try to take the easy way out and offer a patent promise like the one that Red Hat offers, and I’m sure many would like that. But why take the step of unilaterally disarming when we have the chance to disarm everybody through the legislative process? Or better yet, Microsoft should make a meaningful patent promise today, and make a commitment to fight software patents until they are abolished. This means they go off the books in the US, they don’t make it onto the books in the EU, and the rest of the world joins a new innovation commons unfettered by patent fears.

As to the DMCA and so-called Trusted Computing, I think that the SE Linux project has made it pretty clear that one can build a secure operating system without resorting to secrets at the implementation or interoperability level.

The other two points on the list are self-explanatory. Be a good corporate citizen and then everything you win (and everything you lose) is all to the good for the good of all. At least that’s what I understand about fair and open competition.

So, as a first step toward a meaningful relationship with the Open Source community, would Microsoft join us in our long-standing effort to abolish software patents? If so, Microsoft can count me as a partner in their efforts. Please add a comment with your name and affiliation if you want to join me in extending that invitation as well.

by Michael Tiemann


Flex and PHP

Together, Flex and PHP build highly interactive, media-rich, Internet-connected applications that run in every Web browser.

If you’re a PHP developer, you enjoy many benefits: PHP is object-oriented; PHP applications deploy easily; and the PHP community at large produces a vast array of classes, libraries, and tools that ease and facilitate coding.

But PHP applications (as well as applications constructed in all other Web programming languages, including Perl, Python, and Java) decidedly lack interactivity. For example, even the simple task of completing a wizard-like Web interface, say, to purchase an item from an online store, requires many round-trips to the server. Fetching each form in the series requires a round-trip, as does each step of validation. And while Web surfers have come to expect such page refreshes and delays in Web applications, the end result is nonetheless unsatisfying: the user experience of the Web is typically inferior to what can be done on the desktop.

Or was inferior. Now many parts of the Web look, feel, and act like shrinkwrapped software. The sea change? A variety of techniques and technologies. But which is the right one?

In the past year or so, asynchronous JavaScript and XML, the so-called “AJAX,” has improved the interactivity of Web applications. However, AJAX is not a panacea. Browsers remain inconsistent, translating to a herculean effort to validate an application. It’s common to find a morass of HTML and JavaScript exceptions for Internet Explorer 6, Internet Explorer 7, Firefox, and Safari, just to name a few of the divergent browsers.

AJAX security is also an issue. The article “JavaScript Hacking” describes a well-known exploit found in Yahoo IO, Prototype, Dojo, MochiKit, and others. Finally, while AJAX may animate dialogs, it’s difficult to mix rich media — video, animation, music, and sound effects — into the user interface.

Ideally, the features of a rich Web application would be apportioned to match the strengths of each service tier. Data retrieval and persistence would remain the purview of the database engine (for example, MySQL or SQL Server). Business logic would be implemented in PHP on a central server. PHP would process incoming user input and effectuate change in the database. PHP would also retrieve data and return new information to the user. That leaves the third tier — the client — to interact with the user. More specifically, and in the context of a Web application, “the client” is likely the browser.

But if the browsers are notoriously mercurial, and it’s cost-prohibitive to provide custom software on every combination of operating system and processor, how can the ideal be realized? How can a single application look and feel great, and run on every platform?

The answer? Flash — and an entirely new tool set named Adobe Flex. Flex is a set of tools and technologies that you combine to construct, deliver, and run rich, sophisticated, and snappy Web applications. Flex can add a sexy widget in a larger page, or can entirely replace your PHP application’s user interface. Flex runs in Flash Player — found in nearly every Web browser on the planet. (Flash Player 9 or higher is required to run Flex applications.)

Even better, many Flex components are about to be released as open source according to the terms of the Mozilla Public License (MPL).

Building a Better (and Simpler) Web Application

Figures One and Two summarize the distribution of responsibilities in a traditional PHP Web application and in a Flex application, respectively.

FIGURE ONE: The traditional PHP programming model

Figure One (ignoring the addition of AJAX) depicts the classic behavior of PHP applications. Initiated by user action, the browser connects to a Web server running PHP and requests a page. Business logic in the PHP code responds to the request, a process that may require validation of incoming data, data retrieval from the persistent store, computation, and the rendering of a result in HTML, perhaps customized to the user’s browser, since browsers are irregular. Both the client and server are stateless, which requires both to regenerate the state of the application every time.

FIGURE TWO: The Flex and PHP programming model

Contrast Figure One with Figure Two, which depicts the Flex model. In Figure Two, the role of the Web server is greatly simplified, limited to data manipulation (read/write) and computation. The server’s output is neutral XML. On the client, the browser runs the Flash application in an isolated Flash Player environment. The Flex application — which could be a single element in a larger, traditional Web page or a complete Web site — renders the user interface, reacts to mouse clicks and other input events, and changes from one interface “screen” to another (the various states of a user interface are called view states in the parlance of Flex). The Flex application is stateful, unburdening the server from rework. The style, content, and visuals of the application are essentially boundless, limited to the capabilities of the Flash Player. And the application runs consistently on any platform that Flash Player has been ported to, including Linux, Mac OS X, and Windows.

Introducing Flex


All Flex applications are deployed via Flash Player, and leverage familiar and mature standards, including XML, web services, and HTTP. XML describes data and is used to relay it between the application and the server(s); web services provide the infrastructure. You can also use HTTP directly to connect to URLs.
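For the plain-HTTP path, a minimal MXML sketch might look like the following. HTTPService is the standard Flex component for fetching URLs; the url value here is a placeholder, not a real endpoint:

```xml
<!-- Declare a service that fetches XML from the server. resultFormat="e4x"
     parses the reply into an XML object the application can bind to. -->
<mx:HTTPService id="productService"
    url="http://example.com/products.php"
    resultFormat="e4x"/>
```

Calling productService.send() issues the request, and the parsed XML arrives in the service’s result event.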

Flex includes all of the software required to build an application:

*The Flex Framework is a large collection of classes used to build rich Internet applications. The Flex Framework, like other user interface class libraries, provides forms, menus, media players, buttons, and smart layout containers. Further, the Flex Framework furnishes controls usually found in desktop applications, including a rich text editor, a color picker, and a date chooser. You can also change the cursor, apply visual effects and animations, and enable drag-and-drop in your software.

Additionally, the Flex Framework provides components to execute remote procedure calls, and format and validate data. Moreover, the Flex Framework manages the state of your user interface. At a macro scale, a view state can be a pane in a wizard-like form. At a micro scale, a state can reflect the current visual appearance of a control, such as unavailable or “checked”. You can traverse the history of state changes, allowing navigation through a series of ordering screens.

While many AJAX toolkits provide a pool of similar controls, those found in the Flex Framework are written in ActionScript, a true object-oriented programming (OOP) language, and work flawlessly in all browsers on all platforms.

The Flex Framework is part of the free — and open source — Flex SDK. The SDK includes the Flex class library and a suite of command-line tools to build and debug Flex applications. You can download the Flex SDK from You can read about Adobe’s plans to open source Flex at

*Flex Builder is a robust integrated development environment (IDE) for Flex applications. Flex Builder is based on Eclipse, and you can choose to install either the plug-in version of Flex Builder or the standalone version of the tool. (The former assumes you already have a working Eclipse setup; the latter installs everything you need to run Flex Builder and has no prerequisites.) Using Flex Builder, you can interactively build user interfaces, debug your code, and build applications from scratch. Figure Three shows Flex Builder in action.

FIGURE THREE: The Adobe Flex Builder application, shown editing the user interface of an online store application

While the Flex SDK is free, the Flex Builder integrated development environment is not. You can find more information about Flex Builder at

*MXML is an XML-based markup language used to describe screen layout, and is an essential part of Flex. MXML specifies how to assemble a user interface from controls, and can also describe how the user interface should behave, including view states, transitions, data models, and more. For example, the MXML snippet in Listing One produces the top two left-most panels in the interface shown in Figure Four.

LISTING ONE: A sample of MXML code

<mx:Canvas left="10" top="10" width="100%" height="60"
    backgroundColor="#ebebe9" styleName="homeSection">
    <mx:Label left="10" top="10" text="Search Product" width="112" height="22"/>
    <mx:Button left="168" top="30" label="Go" width="47" height="20" styleName="glass"
        click="'This feature is not implemented in this sample', 'Go')"/>
    <mx:TextInput left="10" top="30" height="20" width="150"/>
</mx:Canvas>

<mx:Canvas left="10" top="78" width="100%" height="280" backgroundColor="#ffffff">
    <mx:VBox left="10" top="10" width="100%" height="100%" verticalGap="0">
        <mx:Label text="Programs for Your Lifestyles" styleName="sectionHeader"/>
        <mx:HRule height="5" width="197"/>
        <mx:Label text="Active" styleName="homeProgramHeader"/>
        <mx:Label text="Product Warranty" fontSize="9"/>
        <mx:Spacer height="8" width="100%"/>
        <mx:Label text="Business" styleName="homeProgramHeader"/>
        <mx:Label text="Upgrades, Data" fontSize="9"/>
        <mx:Label text="Traveler" styleName="homeProgramHeader"/>
        <mx:Label text="International Roaming" fontSize="9"/>
        <mx:Label text="Students" styleName="homeProgramHeader"/>
        <mx:Label text="Music Downloads" fontSize="9"/>
        <mx:Label text="Kids" styleName="homeProgramHeader"/>
        <mx:Label text="Games, Ringtones" fontSize="9"/>
    </mx:VBox>
</mx:Canvas>

Here, MXML organizes labels and other widgets into larger canvases. Much like HTML, attributes dictate sizes, colors, offsets, font metrics, and labels. This MXML file is compiled into ActionScript, then fused with the application’s assets (code and graphics) into a single Flex application .SWF file.

FIGURE FOUR: A portion of this interface was created using the snippet MXML code found in Listing One

Ultimately, all Flex software is translated to ActionScript, the native language of the Flash Player. Learning ActionScript should be a priority because most applications require procedures, custom classes, or other glue that can only be achieved in ActionScript. You can also change the cursor, apply visual effects and animations, and enable drag-and-drop in your software via ActionScript.

Figure Five pictures all of the assets that can be combined into a Flex application.

FIGURE FIVE: The construction of a Flex application

Mixing Flex and PHP

With Flex assuming the chores of user interaction within the client, PHP code can be simplified to focus solely on business logic.

As an example, consider a department store shopping application. Assuming the shopping client provides navigation among categories (sporting goods, jewelry, electronics), item selection within a category (shirt sizes, styles, and colors), and checkout (personal information, credit card information, shipping options), PHP could:

*Generate a list of available categories.

*Scan inventory for available items and create an XML-based manifest for the application to subsequently display.

*Retrieve customer information, such as preferences, previous purchases, new product recommendations, and personal information.

*Process final checkout, including billing, generating pick requests to fulfill the order, adjusting inventory levels, and assigning package tracking information to the order. If the client waits for confirmation, the outcome of the transaction (success, failure, exception) can be returned to the client via XML.
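The confirmation for that last step might be a small XML document along these lines (a hypothetical shape; the element names and values are illustrative only, not part of any Flex or PHP API):

```xml
<checkoutResult>
  <status>success</status>
  <orderId>10482</orderId>
  <tracking>1Z999AA10123456784</tracking>
</checkoutResult>
```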

Flex can retrieve information from the Web via simple HTTP requests or via SOAP requests. (A third method, Flash Remoting, a kind of remote procedure call, is also available, and is more efficient. To use Flash Remoting, install Flex server software or one of a number of open source solutions, such as Amfphp, WebORB for PHP from the Midnight Coders, or SabreAMF.)

Flex applications can issue simple HTTP requests for text or XML files. If the requested file is static, the request may be handled directly by the Web server. If the file is generated dynamically, the Web server may invoke PHP to generate the reply, usually returned as raw text or XML.

SOAP requests may be appropriate if you already have an established Web service infrastructure or would like to institute such a suite of modular services. You may also choose SOAP if you plan to consume third-party Web services.

Both approaches to intercommunication are valid, and which you choose depends on your requirements and preconditions. What’s most important is your recognition that Flex applications are more akin to client/server applications than Web applications. Thinking in “pages” becomes thinking in “transactions”.

Migrating to Flex

If you’re about to create an entirely new application, dividing your software into distinct client and server fiefs can be planned from the very start. However, chances are you have a PHP code base already producing results (and revenue). In this very likely case, let Flex enhance what you’ve deployed, making your best even better.

Your Flex applications need not be large or complex. For example, if you maintain a management “dashboard” to monitor sales or web traffic, say, you can embed a rich, Flex pie chart control into a Web page. Or you can create a user registration wizard, including local (client-side) field and form validation. Thinking larger, you could also replace your shopping cart viewer, or provide a drag-and-drop target for file uploads.

Adopting Flex need not be “all or nothing.” Implement, increment, and innovate.

To learn Flex rapidly, read the new book Programming Flex 2 from O’Reilly Media. The authors, both subject matter experts, provide the expertise required to build effective and interactive Flex applications from scratch.

Flex Your PHP Muscle

Flex and PHP is a powerful combination. Flex provides a large framework for user interface and rich client development. Better yet, a Flex application running within the ubiquitous Flash Player works across all platforms and all browsers. PHP, a language you’ve already mastered, is well-suited to business logic and database access. Stripped of laborious page rendering drudgery, coding is greatly simplified. Flex and PHP truly deliver rich Internet applications.



Wizard Boot Camp, Part Seven: /proc Process-Info

This time around, as a part of our long-running series on obscure Linux topics that wizards should know, we’ll wrap up the discussion of Linux processes with a look into the twisty (virtual) corners of the /proc pseudo-filesystem. This little-known directory is a gold mine of information about your system and its processes.

If you typically call utilities like uptime or ps to get system information from scripts, you may start using /proc from now on: your script can read /proc without invoking a new process, so it can be more efficient. One warning, though: /proc isn’t necessarily the same on every Linux system, and non-Linux systems may not have it at all. If you use /proc in a script that should be portable to other systems, check the other systems — or stick to the old standby utilities like uptime.
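For example, a script can read the system uptime straight from /proc instead of invoking the uptime utility; the two fields of /proc/uptime are the ones documented in the proc(5) manual page:

```shell
#!/bin/sh
# /proc/uptime holds two numbers: seconds since boot and aggregate idle time.
read up idle < /proc/uptime
echo "up for ${up%.*} seconds"
```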

Introducing /proc

If you haven’t looked in /proc before, that’s a good place to start. See Listing One.

Listing One: Top level of /proc directory

$ cd /proc
$ ls -F
1/         dma         self@
10/        driver/     slabinfo
1043/      loadavg     stat
11037/     locks       swaps
11041/     meminfo     sys/
11042/     misc        sysrq-trigger
cmdline    modules     sysvipc/
cpuinfo    mounts@     uptime
crypto     mtrr        version
devices    partitions  vmstat
diskstats  scsi/       zoneinfo

We won’t describe every part of /proc here; doing that would fill most of this article’s three pages! (And, to save space, we’ve omitted a lot of the entries from Listing One.) You can get details from the proc manual page. Let’s hit some highlights.

You can treat the virtual filesystem entries in /proc as if they’re on an actual hard disk: for instance, read the files with cat or less; list symbolic links (like /proc/self) with ls -l; cd into directories or run ls on them. The sidebar Reading /proc “files” efficiently explains an efficient way to get the contents of /proc “files.”

Reading /proc “files” efficiently

From a shell script, it’s more efficient to read a file with the bash operator $(<file), which opens a file directly without starting a new process. (Using a utility like cat starts a new process to run the program.) For instance, in a shell script that’s monitoring the system load average, you could read /proc/loadavg into the array named loadavg like this:

loadavg=( $(</proc/loadavg) )

Then ${loadavg[0]} has the first load average (the one-minute value), and so on.

Most of the names are self-explanatory. The numbered directories correspond to the processes running on your system; the number is the process PID. We’ll look at those, and the special symlink named self, in the next section.

  • cpuinfo gives detailed information on the machine’s processor(s).
  • loadavg gives the 1-, 5-, and 15-minute load averages, the number of currently runnable processes and the total number of processes, and the PID of the most recently created process.
  • partitions lists the current disk partitions, including major and minor device numbers and the number of blocks.
  • sys gives detailed system performance information in a series of subdirectories such as fs (filesystem), kernel, and net.
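
Here’s a quick sketch that uses two of those entries; the field layouts are the ones documented in proc(5):

```shell
#!/bin/sh
# Count the processors in /proc/cpuinfo (each one appears as a "processor" line):
grep -c '^processor' /proc/cpuinfo

# Print the partition names from /proc/partitions, skipping the two header lines:
awk 'NR > 2 { print $4 }' /proc/partitions
```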

Per-process directories

As we said, the numbered directories have information about each process on the system. (Or you may see only your own processes — and many other named entries may have permissions that only allow superusers to read them.) These make a nice alternative to the Byzantine options and output formats of ps. For instance, if you’re trying to find the PPID of process 11037 (that is, the PID of the parent that started process 11037), look at the PPid: line of /proc/11037/status:

$ grep '^PPid:' /proc/11037/status

Soon we’ll see more of what’s in these directories. By the way, one of those numeric directories in the ls -F output from Listing One is guaranteed not to exist anymore. Which one? It’s the directory that was created with information about that ls process itself. Once the ls process finished listing /proc, the ls process terminated, so its virtual directory in /proc vanished.

A process that needs to get information about itself can look in the numeric directory pointed to by the symbolic link /proc/self. This is worth a moment of thought before you use it. Consider this example:

$ ls /proc/self
attr     exe         oom_adj    status
auxv     fd          oom_score  task
cmdline  maps        root       wchan
cpuset   mem         smaps
cwd      mounts      stat
environ  mountstats  statm

(If you have ls aliased to run ls -F, you’ll get a result like /proc/self@ instead of the directory entries shown above. In that case, try /bin/ls /proc/self, or \ls /proc/self, to get a listing of the directory’s contents.)

Which process is that listing for: the shell that’s running ls, or for the ls process itself? Think: which process is actually reading /proc/self? Right: the ls process is reading /proc, so you’ll see information about ls in the listing.
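A quick way to convince yourself of this: ask readlink (the standard coreutils utility) what /proc/self/exe points to. Because readlink itself is the process doing the reading, the link resolves to the readlink binary, not to your shell:

```shell
#!/bin/bash
# /proc/self always refers to whichever process reads it.
# Here the reader is readlink, so exe names readlink's own binary,
# not bash's.
readlink /proc/self/exe
```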

To get information on your shell, use the $$ parameter. It expands into the current shell’s PID number. There’s an example in Listing Two for the shell whose PID happens to be 2588.

Listing Two: A shell’s own process information

$ echo /proc/$$
/proc/2588
$ ls -l /proc/$$
total 0
dr-xr-xr-x 2 jpeek jpeek 0 2008-02-12 12:26 attr
-r-------- 1 jpeek jpeek 0 2008-02-12 12:26 auxv
-r--r--r-- 1 jpeek jpeek 0 2008-02-12 12:26 cmdline
-r--r--r-- 1 jpeek jpeek 0 2008-02-12 12:26 cpuset
lrwxrwxrwx 1 jpeek jpeek 0 2008-02-12 12:26 cwd -> /home/jpeek
-r-------- 1 jpeek jpeek 0 2008-02-12 12:26 environ
lrwxrwxrwx 1 jpeek jpeek 0 2008-02-12 12:26 exe -> /bin/bash
dr-x------ 2 jpeek jpeek 0 2008-02-12 12:26 fd
-r--r--r-- 1 jpeek jpeek 0 2008-02-12 12:26 maps
-rw------- 1 jpeek jpeek 0 2008-02-12 12:26 mem
-r--r--r-- 1 jpeek jpeek 0 2008-02-12 12:26 mounts
-r-------- 1 jpeek jpeek 0 2008-02-12 12:26 mountstats
-rw-r--r-- 1 jpeek jpeek 0 2008-02-12 12:26 oom_adj
-r--r--r-- 1 jpeek jpeek 0 2008-02-12 12:26 oom_score
lrwxrwxrwx 1 jpeek jpeek 0 2008-02-12 12:26 root -> /
-r--r--r-- 1 jpeek jpeek 0 2008-02-12 12:26 smaps
-r--r--r-- 1 jpeek jpeek 0 2008-02-12 12:26 stat
-r--r--r-- 1 jpeek jpeek 0 2008-02-12 12:26 statm
-r--r--r-- 1 jpeek jpeek 0 2008-02-12 12:26 status
dr-xr-xr-x 3 jpeek jpeek 0 2008-02-12 12:26 task
-r--r--r-- 1 jpeek jpeek 0 2008-02-12 12:26 wchan

Although the sizes list as 0 bytes, that's deceptive: each file produces whatever the current value is at the time you read it. For instance, the status “file” gives the current status of the process:

$ cat /proc/self/status
Name:   cat
State:  R (running)
SleepAVG:       88%
Pid:    22383
PPid:   22010
Groups: 1007
VmSize:     2748 kB

The contents of status are a handy alternative to reading many of the other files in the directory — which give the same information in smaller chunks.
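A script can pull single fields out of status with a utility like awk; here's a minimal sketch (the field names are the ones shown in the listing above):

```shell
#!/bin/bash
# Extract a couple of fields from the shell's own status file.
# $$ is the shell's PID, so /proc/$$/status names the shell itself
# even though awk is the process doing the reading.
name=$(awk '/^Name:/ {print $2}' /proc/$$/status)
ppid=$(awk '/^PPid:/ {print $2}' /proc/$$/status)
echo "process $$ ($name) was started by PID $ppid"
```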

Several of the entries are symbolic links. Reading the directory with ls -l shows each link’s target. For instance, the process’ current directory, pointed to by cwd, is /home/jpeek. (The shell’s current directory was /home/jpeek, which cat inherited when the shell started it.) The root entry points to the process’ root directory. That’s typically /, as you see here — but it can be different for a process run with chroot(2).
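You can resolve those links yourself with readlink; a quick sketch:

```shell
#!/bin/bash
# cwd tracks the process's current directory as it changes;
# root is normally /, unless the process runs under chroot.
cd /tmp
readlink /proc/$$/cwd    # the shell's current directory (here, /tmp)
readlink /proc/$$/root   # normally /
```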

The fd subdirectory lists open file descriptors for the process… which leads us neatly into the next section.

The /proc/####/fd and /dev/std* subdirectories

I’ve talked before in this column about open files and file descriptor numbers. Two handy virtual parts of the Linux filesystem, the /proc/nnnn/fd and /dev/std* subdirectories, make it easy to explore these.

Let’s start with some special entries in /dev. The entries /dev/stdin, /dev/stdout, and /dev/stderr point to those open standard I/O files in the current process. These entries are actually symlinks pointing into the (virtual) /dev/fd subdirectory, as you can see by listing them:

$ ls -l /dev/std*
lrwxrwxrwx ... /dev/stderr -> fd/2
lrwxrwxrwx ... /dev/stdin -> fd/0
lrwxrwxrwx ... /dev/stdout -> fd/1

What’s in the fd subdirectory? It’s a list of the currently-open file descriptors in the process. It’s actually a symlink to the /proc/self/fd directory, which has the real information:

$ ls /dev/fd
0  1  2  3
$ ls -l /dev/fd
lrwxrwxrwx ... /dev/fd -> /proc/self/fd
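One everyday consumer of these virtual entries is bash's process substitution: <(command) expands to a /dev/fd pathname that the receiving program simply opens like any other file. A small sketch:

```shell
#!/bin/bash
# <(cmd) becomes a /dev/fd pathname connected to cmd's output,
# so programs that expect filenames can read from a pipeline.
echo <(true)    # prints a pathname such as /dev/fd/63
diff <(printf 'a\nb\n') <(printf 'a\nc\n') || true
```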

Let’s look in that directory:

$ ls -l /proc/self/fd
total 0
lrwx------ ... 0 -> /dev/pts/5
lrwx------ ... 1 -> /dev/pts/5
lrwx------ ... 2 -> /dev/pts/5
lr-x------ ... 3 -> /proc/26055/fd
$ tty
/dev/pts/5

(When you list that directory, you're actually seeing the open files for the ls process — as explained earlier in this column. But, since ls inherits the open files from the process that started it — in this case, the shell that ran ls — what you see are the shell's open files plus any other files that ls itself has opened.)

The standard input, output, and error all point to /dev/pts/5, which is our current terminal device — as tty confirms. So, another way to write to the standard error of your current process — instead of using the Bourne shells’ operator 1>&2 — is by writing to /dev/stderr. This is a great help to C-shell scripts, since they don’t have an easy way to write arbitrary text to the standard error (which is where error messages should be written):

echo an error > /dev/stderr

File descriptor 3 is also open in this process; it points to the fd subdirectory of process 26055. As explained above, that's the descriptor ls opened in order to read the directory itself.

This leads to a nice technique for exploring how open files are used in a shell: by listing /proc/self/fd after you change the shell’s open files.

Fiddling with file descriptors

When you experiment with file descriptors, it may be best to do so from a shell script, or from an interactive subshell. That way, if you do something you didn't mean to do (such as redirecting the standard output to a file, so you can't see the output of commands), it's easy to put things back to normal: simply terminate the subshell. Because changes to a child process don't affect its parent process, the parent shell retains its original standard input and output after the subshell exits.

Let’s start a child bash shell. When we’re done playing — or, if something goes wrong — we can get back to a sane state by typing CTRL-D or exit to terminate the child shell. We’ll set the shell prompt to sub$ as a reminder that this is a subshell. To save typing, we’ll store a temporary filename in an environment variable with the arbitrary name T. (Environment variables are copied to child processes.) We’ll also make an alias that lists /proc/self/fd.

Listing Three shows some examples. (Try them yourself!) To avoid confusion here, we’ll omit listings for file descriptors that bash and ls may open.

Listing Three: Watching open files in /proc/self/fd

$ export T=/tmp/myfile
$ bash
$ PS1='sub$ '
sub$ alias ck='ls -l /proc/self/fd'
sub$ ck
total 0
lrwxrwxrwx ... 0 -> /dev/pts/5
lrwxrwxrwx ... 1 -> /dev/pts/5
lrwxrwxrwx ... 2 -> /dev/pts/5
sub$ exec 3> $T
sub$ ck
total 0
lrwxrwxrwx ... 0 -> /dev/pts/5
lrwxrwxrwx ... 1 -> /dev/pts/5
lrwxrwxrwx ... 2 -> /dev/pts/5
lrwxrwxrwx ... 3 -> /tmp/myfile
sub$ echo a test message 1>&3
sub$ cat $T
a test message
sub$ cat /proc/self/fd/3
a test message
sub$ ls garbage
ls: cannot access garbage
sub$ ls garbage 2>&3
sub$ cat $T
a test message
ls: cannot access garbage
sub$ exit
$ rm $T; unset T

Here’s what we do:

  • After defining the ck alias and running it, we can see the usual three standard I/O file descriptors.
  • Running exec 3> $T opens the file /tmp/myfile for writing and associates file descriptor 3 with it.
  • The shell operator 1>&3 makes the standard output of echo (file descriptor 1) go to file descriptor 3 — which is the file in /tmp. We write three words there.
  • Reading the file with cat $T shows the words we wrote there.
  • As an example that’s somewhat opaque but also illustrative, cat /proc/self/fd/3 does the same thing! (Although the file /tmp/myfile is only open for writing from the shell, don’t let that confuse you. /proc/self/fd/3 is just a symbolic link pointing to the file that was opened by the shell. The command cat /proc/self/fd/3 is completely independent of the shell; cat is simply reading a file in the filesystem — which it finds via the symbolic link at /proc/self/fd/3.)
  • We run ls garbage to generate an error message on the standard error. Then we re-run the command with the operator 2>&3, which sends standard error (fd 2) to the file in /tmp via fd 3. Running cat shows the two lines now in /tmp/myfile. This illustrates another important reason to use open files and file descriptors instead of constantly re-opening a file from a script: the file stays open, and you can keep adding text to it, until you close the open file or end the shell process that's holding it open.
  • We end the shell subprocess with exit. That automatically closes the open file /tmp/myfile. Then we remove the file and the environment variable that held its name.
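As an aside, exiting the subshell isn't the only way to close an open file: bash can close a descriptor in place with the N>&- operator. A minimal sketch (the filename is arbitrary):

```shell
#!/bin/bash
# Open a file on fd 3, write through it twice, then close fd 3
# explicitly instead of ending the shell process.
T=/tmp/fd-demo.$$
exec 3> "$T"            # open $T for writing on fd 3
echo "first line"  1>&3
echo "second line" 1>&3
exec 3>&-               # close fd 3; later writes to fd 3 would fail
cat "$T"
rm "$T"
```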


There’s a lot more to see in /proc, and a lot to learn from experimenting with /proc/self. Until next time, try exploring and see what you find.



OpenSolaris: What Ubuntu wants to be?

What would Ubuntu be like if it were an OS for grown-ups?

OpenSolaris 2008.05 Release “Project Indiana”

This week at its CommunityOne event in San Francisco, Sun will release its May 2008 build of OpenSolaris (2008.05), the Open Source operating system based on the source code of the Solaris 10 enterprise UNIX OS and the first to be designated with “Production” support offerings. While it is very much community software, not yet at the level of polish for end-user adoption that many of the latest Linux distributions now enjoy, it shows promise and enormous potential as an enterprise-class UNIX desktop and server with an Ubuntu-like flavor.

(See screenshot gallery of OpenSolaris 2008.05 Release installation and UI.)

Also: Commercial Open Solaris Ships (Paula Rooney)

Founded as an Open Source project by Sun Microsystems in June of 2005, and originally created as a clearing house for releasing CDDL-licensed Solaris code that others (such as Nexenta and Sine Nomine) could use to produce Solaris-compatible operating systems, OpenSolaris refocused its efforts in the last year and launched Project Indiana, Sun's equivalent to Red Hat's Fedora or Novell's OpenSUSE — a place where leading- and bleeding-edge enhancements to Solaris 10 can be tested and proofed by the Open Source community at large. To give Indiana some legitimacy, Sun hired Debian GNU/Linux founder Ian Murdock to lead the project, in the hopes that his Linux roots and community ties would improve OpenSolaris adoption.

Open Source UNIX x86-compatible operating systems are nothing new. The various BSD OSes have had a loyal but niche following for years; FreeBSD, NetBSD, and OpenBSD are the major derivatives. Not surprisingly, ideological differences and personality clashes among the BSDs' founders and contributors have created fragmentation and compatibility issues between the various BSDs, which has confused the landscape and limited BSD's adoption. To further complicate matters, Apple has even released the source code of Mac OS X's BSD-based UNIX core as the “Darwin” project, and an installable distribution of Darwin exists as GNU Darwin.

Despite a loyal following among research academia, vertical systems integrators, and some Internet service providers, the BSDs never really caught on with end users the way Linux has. To further add to BSD's woes, no BSD-based OS has made significant inroads into the enterprise; only the System V-based UNIX OSes, such as Sun's Solaris, IBM's AIX, and HP's HP-UX, now occupy that coveted mid-range and high-end space. Before pursuing its litigious path of self-destruction, even SCO's UnixWare and OpenServer System V OSes for x86 had some decent vertical penetration into the retail industry. And before SGI abandoned its native IRIX System V platform for Linux, it too had a strong foothold in the supercomputing and CGI industries.

Still, OpenSolaris is the first and only System V-based UNIX to have been released as Open Source. However, it uses the CDDL license, an MPL derivative which is incompatible with the GNU GPLv2 license that Linux uses. This has prevented Solaris source code from co-mingling with Linux, and has also set up a virtual “Mirror, Mirror” universe of OpenSolaris developers who don't really cooperate with the general Linux population at large. As a result, porting and packaging efforts of major Open Source projects and software to Solaris have been relatively slow when compared to the many releases and fast adoption of the various Linux distributions. However, there has been some recent indication that Sun might release Solaris under GPLv3, which could cause a watershed of activity on the platform, as many packages and projects that run on Linux distributions are headed in that direction as well. And while it's somewhat wishful thinking, though not completely out of the question, a GPLv2 release of Solaris would eventually bring about true “Unixfication” of the two platforms.

All this history aside, I'm very impressed with the OpenSolaris 2008.05 release — clearly, Ubuntu's success has rubbed off on the OpenSolaris crowd, which has adopted a lot of that Linux distribution's look and feel. End users for the most part should feel right at home with OpenSolaris, with its up-to-date GNOME 2.22 interface, the very same that powers Ubuntu Hardy Heron. The installation system boots as a Live CD, just like Ubuntu's, and installs with only a few mouse clicks. Many new configuration applets and end-user programs have been added, making OpenSolaris a much more “livable” environment than its big brother, Solaris 10. The most current and stable version of Firefox comes pre-installed and is even capable of running sites that use Adobe Flash. I had no problems with videos on YouTube and Google Video, or manipulating photos on Picnik or Adobe Photoshop Express. Battlestar Galactica replays on the Sci-Fi Channel rewind website ran just fine, too.

I did have some issues, however, getting Adobe's Acrobat Reader installed, as Adobe hasn't built an x86 Solaris version yet — only one for SPARC. OpenSolaris provides an Open Source alternative to Acrobat in the form of Evince. My suggested solution to the SPARC-to-x86 problem — one which is going to plague Solaris x86 for some time until all of this package stuff is rationalized — is that Sun should bulk-license Transitive's QuickTransit software, from the folks who built the PowerPC-to-x86 “Rosetta” compatibility layer for Mac OS X. In fact, I'd have them quickly port a Linux-to-Solaris-x86 version for OpenSolaris and install that as well.

Beauty is not only skin-deep. OpenSolaris employs the very same enterprise-proven, high-performance Solaris 10 kernel that powers the biggest and baddest Sun boxes, and has the stability and monolithic scalability to match, something that commodity Linux desktops and servers — while far more stable and sprightly than Windows OSes — lack in comparison. In addition to the Solaris 10 kernel, OpenSolaris makes use of Sun's advanced 128-bit Zettabyte File System, or ZFS, which permits “pooling” of storage on networked Solaris-based systems, as well as Solaris 10's native “containers” for OS-based high-performance virtualization. Like their Linux cousins, OpenSolaris and Solaris 10 are also Xen-hypervisor enabled, as both a virtualization domain and guest.

As a separate free download, Sun also provides VirtualBox (recently acquired through the Innotek purchase) as host-based virtualization for Linux and Windows compatibility, similar to VMware Workstation 6.

With all these advanced enterprise UNIX features, though, OpenSolaris still isn't quite as polished as its Linux cousins. For example, getting something as simple as SAMBA working requires creating a ZFS storage pool at the command line and executing a bunch of Solarisy mumbo-jumbo, in addition to downloading SAMBA through the OpenSolaris package manager, IPS. (IPS is similar to other network-aware package managers such as Debian's and Ubuntu's aptitude, or Fedora's YUM.) On Ubuntu or any other Linux distribution, this is as simple as editing /etc/samba/smb.conf and restarting the /etc/init.d/samba daemon. It's even easier with most Linux-based configuration GUIs, where you don't need to touch the command line at all to make basic stuff work.

Additionally, the lack of compiled packages compared to Linux can also make for a frustrating experience. While IPS is an excellent system, the Package Manager GUI on OpenSolaris is workable (although I would have preferred they port the Debian/Ubuntu package GUI, Synaptic, instead of reinventing the wheel with a flaky 1.0 interface), and the pkg command itself is pretty robust — the main OpenSolaris repository holds only about 1,200 unique packages, a pittance compared to what is available for Ubuntu, OpenSUSE, or Fedora. While third-party IPS repositories such as Sunfreeware and Blastwave are sprouting up, it will take a long time for OpenSolaris to gain comparable inertia and an end-user following until the system reaches package parity with the popular Linux distributions.

Nevertheless, OpenSolaris 2008.05 is a major milestone release for the project, and its contributors' efforts should be commended. I've upgraded one of my servers to the system, and I look forward to tracking further bi-annual milestone releases of the fledgling Open Source OS.

What’s your take on OpenSolaris? Talk Back and let me know.

by Jason Perlow