28 Jul 08

Some Open-Source Advocates Find Google’s Android a Sinister Threat

Android, Google’s new mobile software platform based on the Linux kernel, is scheduled to be released in early 2008 under an open-source license. But hope for the new platform is mingled with worries that it won’t be as free and open as the initial publicity surrounding the release strenuously implies.

“I wouldn’t bother with this,” says Bruce Perens, a professional evangelist of open-source software. “It’s so easy to find a project that is 100 percent open source right now to work on, or indeed to create one. That way, I’d be a little more sure that my work wouldn’t be locked up with proprietary stuff forever.”

Android’s release last week was initially greeted as a breath of fresh air by those hoping to inject more freedom into the smartphone industry, which is currently saddled by various restrictions. Lockdowns on hardware functionality, demanded by service providers and enforced by the manufacturers, have resulted in a marketplace filled with crippled devices that are only minimally configurable or expandable.

However, the announcement that Android would be released under a software license which allows for some restrictions to remain in place, albeit in a more limited way, has given many pause.

The Android software platform will be licensed not under the GPL, the license that covers Linux and GNU software, but under the Apache License, which does not include the GPL’s restriction on closed modifications.

According to the Android FAQ page, “The Apache license allows manufacturers and mobile operators to innovate using the platform without the requirement to contribute those innovations back to the open-source community.” The page promises that “industry players can add proprietary functionality to their products based on Android without needing to contribute anything back to the platform,” and, to be sure, “companies can remove functionality if they choose.”

Those restrictions, plus the licensing of the preliminary developer tools, have raised red flags for some potential developers.

“What happened to the whole ‘full stack’ and ‘open-source’ thing?” software developer Robilad asks on his blog, referring to the language used by Google in Android’s announcement. “Let’s just hope Google gets around to releasing the actual Android code under an open-source license before 2017.”

Google’s wording doesn’t give a clear impression of who’s going to reap the benefits of Android. With one hand, Android offers (in a video on the project’s website) “the ability to have your cellphone do whatever the heck you want it to do,” while with the other it panders to “industry players” who may want to curtail that user experience.

GPL-licensed code makes no such compromises.

“Anyone is free to use, change or improve our code,” explains Steven Mosher, vice president of marketing for OpenMoko, another Linux-based mobile platform which now finds itself in competition with Android.

“They owe us nothing,” Mosher says of smartphone manufacturers using OpenMoko. “Our only request is this: They owe other people the same rights we gave them. We give you the code for free. If you change it or improve it, you must give your work back to the common good.”

According to Mosher, members of the open-source community are concerned that, by choosing the Apache License, Android is “using ‘open source’, but cynically neglecting its principles.”

Google isn’t terribly worried about the licensing controversy. “We’ve already seen tremendous developer interest in the Android SDK, with downloads surpassing all others on code.google.com,” says a spokesperson.

Hal Steger, vice president of marketing for Funambol, an open-source messaging software project, warns that “Google’s choice to go with the Apache License will likely result in some developers sticking with the OpenMoko-type approach.”

Clearly, in order to launch a successful mobile platform, it’s necessary to woo the powers that be. Even the combined might of Google and the open-source community won’t soon overthrow the iron whim of the cellular carriers. Some open-source advocates, though, do see potential in Google’s power to broaden the market.

Linux Foundation executive director Jim Zemlin finds one thing especially promising: the challenge Google has brought against the closed-source status quo for handhelds, a market currently dominated by Microsoft’s Windows Mobile and Symbian (the software platform jointly owned by Nokia, Sony Ericsson, Siemens and other handset makers).

“Google is proliferating the use of the Linux kernel as the standard for mobile devices,” says Zemlin. “Similar to the server operating environment, the world will likely end up with two camps: Linux-based phones on one side with Microsoft and Symbian on the other. My guess is Microsoft and Symbian will continue to lag due to the lack of agility from their proprietary development models. It’s difficult for them to compete with open-source licenses, no matter which specific one.”

by Paul Adams on Wired.com

28 Jul 08

An Interview with Mozilla CEO John Lilly

John Lilly - Mozilla CEO

When Mozilla released the Firefox browser in 2004, Microsoft’s Internet Explorer dominated the market with a whopping 95 percent share. Now Firefox has 18 percent of the market and Apple’s Safari has another 6 percent. Along the way, Wall Streeters began pressing Mozilla to go public (it won’t) and Mozilla CEO John Lilly wowed scores of suits with his talks about how the open source project became a successful business. Just before the launch of Firefox 3 in June, Wired sat down with Lilly at his company headquarters in Mountain View, California.

Wired: What are the biggest changes in Firefox 3?

Lilly: It’s got 15,000 improvements. It’s more secure and easier to use. But, most important, it’s two or three times faster. Think about all the programs we run in our browser now — like office software. When Firefox 2 was developed three years ago, we ran those applications on our desktop. So in Firefox 3 we improved the JavaScript engine and changed the way the browser handles and allocates memory.

Wired: Why did Firefox catch on in the first place, and how has it stolen users from Microsoft’s Internet Explorer?

Lilly: When Firefox came out in 2004, there wasn’t much browser innovation happening at Microsoft. People used Firefox, saw it was really fast and liked the tabs, and stayed.

Also, people now understand what we stand for — the participatory and open Web — and they like that. It’s why we launched Firefox 3 in more than 45 languages. The idea that people worldwide can feel a sense of ownership about software that’s initially only in English — like IE7 — is bogus.

Wired: That’s nice, but it’s not exactly a long-term strategic plan. Do you worry about competition from Apple now that it has enabled Safari on Windows?

Lilly: I used to work at Apple. I have an iPhone. But there are other ways of developing software. Instead of relying on individual brilliance, we rely on enabling a network around the world, like Wikipedia does. That’s a different aesthetic.

Wired: Is it an aesthetic or a rationalization for not producing well-designed products?

Lilly: It’s an aesthetic. Apple is great if you like the way it comes. Firefox is great if you like to customize things. The focus is on how it lets you do what you want, not how it looks.

Wired: Roughly 85 percent of your revenue comes from Google. What happens if Google decides to build its own browser?

Lilly: It’s kind of a sucker’s game to speculate about what Google’s going to do. That said, it was the Google guys who approached us — not the other way around — because Firefox was a good browser. Our relationship will be just fine, as long as we build something that people give a damn about.

Wired: Mozilla is a nonprofit foundation but also a for-profit startup. How does that work?

Lilly: We’re like a university. We have a public mission — keeping the Web open — that we’re supporting with economics. It’s just that our competitors are all for-profit companies.

Wired: Does the browser still matter now that users access the Net with different, non-browser-dependent devices, like Amazon.com’s Kindle?

Lilly: That’s a bogus argument. People have been saying for 10 or 15 years that the PC is dead. Even with a good mobile device, I’ll sit at my laptop when I’m near it because it’s a better experience.

Wired: But still an imperfect one.

Lilly: There are huge problems left to solve. If your data is in the cloud, how do you access it when you’re offline? How do you display video without using proprietary technologies? And then there’s the whole mobile Web; I think it’s not at all clear that it will look like the actual Web.

Wired: Are you going to develop a version of Firefox for the iPhone?

Lilly: No. Apple makes it too hard. They say it’s because of technical issues — they don’t want outsiders to disrupt the user experience. That’s a business argument masquerading as a technological argument. We’re focusing on more important stuff. The iPhone has been influential, but there’s not that many of them. We’re part of the LiMo Foundation — Linux on Mobile. The Razr V2 is a LiMo phone, and you’ll see more in the next year or so.

by Fred Vogelstein on Wired.com

28 Jul 08

Google’s “Cuil” new competitor: not so cool for Google?

Cuil, pronounced “cool”, is a brand new search engine claiming an index of more than 120 billion pages, founded by former Google employees who were search engine stars to begin with. How cool is Cuil, and will it survive a duel with the big G?

Hear ye, hear ye: a new search engine has come to town, and the buzz is that it has the biggest and best chance yet of truly challenging Google. No previous search engine has arrived with that kind of buzz.

Available at cuil.com, the site tells us in its “about us” section that Cuil is an old Irish word for knowledge, and that if you want knowledge, “ask Cuil”.

The founders of Cuil are former Google employees who were big search stars before Google came along, and it’s their pedigree, both with Google and before it, that has the pundits thinking Cuil has every chance of truly being cool.

Billed as “the world’s biggest search engine”, with 180 billion pages spidered and over 120 billion of those included in the actual Cuil index, the founders claim that “the Internet has grown” and that they think “it’s time search did too.”

That 120 billion-plus page figure is claimed to be more than three times what Google indexes and ten times the number of pages Google searches. Google recently did put out a release saying it has more than 1 trillion links in its database, although links are not necessarily individual pages.

Cuil also takes a major dig at Google’s ultra-successful and ultra-popular “PageRank” concept, the idea behind the undeniable accuracy that everyone else, so far, has tried so hard to beat.

Cuil’s attack on “PageRank” is evident when it says on its “about us” page: “Rather than rely on superficial popularity metrics, Cuil searches for and ranks pages based on their content and relevance. When we find a page with your keywords, we stay on that page and analyze the rest of its content, its concepts, their inter-relationships and the page’s coherency.”

What Cuil says it does next: “Then we offer you helpful choices and suggestions until you find the page you want and that you know is out there. We believe that analyzing the Web rather than our users is a more useful approach, so we don’t collect data about you and your habits, lest we are tempted to peek. With Cuil, your search history is always private.”
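To make the contrast concrete, here is a purely hypothetical Python sketch of the difference between ranking by a popularity metric and ranking by page content. The page names, link counts, and scoring functions are all invented for illustration; this is not Cuil’s actual algorithm.

```python
# Hypothetical contrast between popularity ranking and content-based
# ranking, in the spirit of Cuil's description (not its real algorithm).

pages = {
    "a.example": {"inbound_links": 900, "text": "celebrity gossip and news"},
    "b.example": {"inbound_links": 3,
                  "text": "cuil is an old irish word for knowledge"},
}

def by_popularity(query, pages):
    # Rank solely by inbound links, ignoring whether the page
    # actually covers the query at all.
    return max(pages, key=lambda p: pages[p]["inbound_links"])

def by_content(query, pages):
    # Rank by how many query terms the page's text contains.
    # (Crude substring matching; a real engine would tokenize,
    # weigh term positions, and model concepts.)
    terms = query.lower().split()
    return max(pages, key=lambda p: sum(t in pages[p]["text"] for t in terms))

print(by_popularity("irish word for knowledge", pages))  # → a.example
print(by_content("irish word for knowledge", pages))     # → b.example
```

The point is only that the two metrics can disagree: the heavily linked page wins on popularity even though the obscure page is the one that actually answers the query.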

According to Cuil’s press release, the site “provides organized and relevant results based on Web page content analysis”, which goes beyond link analysis and goes into deeper page analysis, which is then grouped and sorted by category.

According to Cuil, this gives you better results, with “tabs” breaking up information into further searchable categories, images to identify topics, and “search refining suggestions” to guide you to better answers.

Ultimately, of course, consumers will need to try it out and see if they get better and more accurate results than they currently get with Google.

My own quick attempts at searching with Cuil while researching this story showed that it did bring back relevant results, and it definitely presents information in what appears to be a more organised and more graphical manner.

It’s clearly early days, but I’ll definitely be searching with Cuil again just to see what it comes up with. So far, it’s dramatically more impressive than any other Google competitor I’ve ever seen, and makes me wonder why Microsoft is stuffing around trying to buy Yahoo! when it could be throwing some of that $50 billion towards Cuil.

Tom Costello, CEO and co-founder of Cuil, said in the press release: “The Web continues to grow at a fantastic rate and other search engines are unable to keep up with it.”

Costello continued: “Our significant breakthroughs in search technology have enabled us to index much more of the Internet, placing nearly the entire Web at the fingertips of every user. In addition, Cuil presents searchers with content-based results, not just popular ones, providing different and more insightful answers that illustrate the vastness and the variety of the Web.”

So, who are the people behind the Cuil search engine?

Well, it’s probably best to let the Cuil press release explain it:

“Cuil’s technology was developed by a team with extensive history in search. The company is led by husband-and-wife team Tom Costello and Anna Patterson.

“Mr. Costello researched and developed search engines at Stanford University and IBM; Ms. Patterson is best known for her work at Google, where she was the architect of the company’s large search index and led a Web page ranking team.

“They refused to accept the limitations of current search technology and dedicated themselves to building a more comprehensive search engine.

“Together with Russell Power, Anna’s former colleague from Google, they founded Cuil to give users the opportunity to explore the Internet more fully and discover its true potential.”

Anna Patterson, the President and COO of Cuil, explained further: “Since we met at Stanford, Tom and I have shared a vision of the ideal search engine. Our team approaches search differently.”

Patterson continues: “By leveraging our expertise in search architecture and relevance methods, we’ve built a more efficient yet richer search engine from the ground up. The Internet has grown and we think it’s time search did too.”

Cuil also promises to “guarantee online privacy for searchers”, explaining that they “rank pages based on content instead of number of clicks”, something that makes “personal data collection unnecessary”, rendering “personal search history always private.”

Cuil lists some interesting information on its philosophy, a quick demo of its features, and a fascinating 11-question FAQ, which also explains that “Twiceler” is Cuil’s web crawler, something webmasters should be aware of.

All in all, it looks like the most exciting new search engine so far, and if it truly is any good, it will give Google the impetus it needs to take its own search capabilities into the next dimension. Being the biggest and best for too long with no true competition is no good for anyone!

by Alex Zaharov-Reutt on ITWire.com

28 Jul 08

Lesson From the DNS Bug: Patching Isn’t Enough

Despite the best efforts of the security community, the details of a critical internet vulnerability discovered by Dan Kaminsky about six months ago have leaked. Hackers are racing to produce exploit code, and network operators who haven’t already patched the hole are scrambling to catch up. The whole mess is a good illustration of the problems with researching and disclosing flaws like this.

The details of the vulnerability aren’t important, but basically it’s a form of DNS cache poisoning. The DNS system is what translates domain names people understand, like http://www.schneier.com, to IP addresses computers understand: 204.11.246.1. There is a whole family of vulnerabilities where the DNS system on your computer is fooled into thinking that the IP address for http://www.badsite.com is really the IP address for http://www.goodsite.com — there’s no way for you to tell the difference — and that allows the criminals at http://www.badsite.com to trick you into doing all sorts of things, like giving up your bank account details. Kaminsky discovered a particularly nasty variant of this cache-poisoning attack.
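To see why poisoning is so dangerous, consider a deliberately naive toy resolver in Python. This is an illustration of the failure mode only, not of the real DNS protocol or of Kaminsky’s specific attack; all names and addresses are made up.

```python
# Toy illustration of DNS cache poisoning (not the real protocol).
# A stub resolver caches whatever answer arrives first for a name;
# if an attacker's forged reply is accepted, every later lookup is wrong.

class ToyResolver:
    def __init__(self):
        self.cache = {}

    def accept_answer(self, name, ip):
        # A real resolver checks the transaction ID (and, post-fix, the
        # source port) before caching. This toy one trusts any answer,
        # which is exactly the weakness poisoning attacks exploit.
        if name not in self.cache:
            self.cache[name] = ip

    def lookup(self, name):
        return self.cache.get(name)

resolver = ToyResolver()
# The forged reply arrives before the legitimate one:
resolver.accept_answer("www.goodsite.com", "10.6.6.6")       # attacker's server
resolver.accept_answer("www.goodsite.com", "93.184.216.34")  # real answer, too late
print(resolver.lookup("www.goodsite.com"))  # → 10.6.6.6
```

Once the bad entry is cached, every user of that resolver is silently sent to the attacker’s address until the entry expires.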

Here’s the way the timeline was supposed to work: Kaminsky discovered the vulnerability about six months ago, and quietly worked with vendors to patch it. (There’s a fairly straightforward fix, although the implementation nuances are complicated.) Of course, this meant describing the vulnerability to them; why would companies like Microsoft and Cisco believe him otherwise? On July 8, he held a press conference to announce the vulnerability — but not the details — and reveal that a patch was available from a long list of vendors. We would all have a month to patch, and Kaminsky would release details of the vulnerability at the BlackHat conference early next month.

Of course, the details leaked. How isn’t important; it could have leaked a zillion different ways. Too many people knew about it for it to remain secret. Others who knew the general idea were too smart not to speculate on the details. I’m kind of amazed the details remained secret for this long; undoubtedly it had leaked into the underground community before the public leak two days ago. So now everyone who back-burnered the problem is rushing to patch, while the hacker community is racing to produce working exploits.

What’s the moral here? It’s easy to condemn Kaminsky: If he had shut up about the problem, we wouldn’t be in this mess. But that’s just wrong. Kaminsky found the vulnerability by accident. There’s no reason to believe he was the first one to find it, and it’s ridiculous to believe he would be the last. Don’t shoot the messenger. The problem is with the DNS protocol; it’s insecure.

The real lesson is that the patch treadmill doesn’t work, and it hasn’t for years. This cycle of finding security holes and rushing to patch them before the bad guys exploit those vulnerabilities is expensive, inefficient and incomplete. We need to design security into our systems right from the beginning. We need assurance. We need security engineers involved in system design. This process won’t prevent every vulnerability, but it’s much more secure — and cheaper — than the patch treadmill we’re all on now.

What a security engineer brings to the problem is a particular mindset. He thinks about systems from a security perspective. It’s not that he discovers all possible attacks before the bad guys do; it’s more that he anticipates potential types of attacks, and defends against them even if he doesn’t know their details. I see this all the time in good cryptographic designs. It’s over-engineering based on intuition, but if the security engineer has good intuition, it generally works.

Kaminsky’s vulnerability is a perfect example of this. Years ago, cryptographer Daniel J. Bernstein looked at DNS security and decided that Source Port Randomization was a smart design choice. That’s exactly the work-around being rolled out now following Kaminsky’s discovery. Bernstein didn’t discover Kaminsky’s attack; instead, he saw a general class of attacks and realized that this enhancement could protect against them. Consequently, the DNS program he wrote in 2000, djbdns, doesn’t need to be patched; it’s already immune to Kaminsky’s attack.
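A rough back-of-the-envelope calculation shows what source port randomization buys. With a fixed source port, a forged reply only has to match the 16-bit DNS transaction ID; randomizing the source port forces the attacker to match roughly 16 more bits (the exact figure depends on how many ports are actually usable):

```python
# Why source port randomization raises the bar for forged DNS replies.

txid_space = 2 ** 16   # possible 16-bit transaction IDs
port_space = 2 ** 16   # possible source ports (roughly; some are reserved)

# Expected size of the guess space a spoofed reply must hit:
fixed_port_guesses = txid_space                # fixed, predictable port
random_port_guesses = txid_space * port_space  # randomized port

print(fixed_port_guesses)                          # 65536
print(random_port_guesses)                         # 4294967296
print(random_port_guesses // fixed_port_guesses)   # 65536x harder
```

The attack isn’t made impossible, just tens of thousands of times more expensive per poisoning attempt, which is why the work-around was judged good enough to deploy quickly.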

That’s what a good design looks like. It’s not just secure against known attacks; it’s also secure against unknown attacks. We need more of this, not just on the internet but in voting machines, ID cards, transportation payment cards … everywhere. Stop assuming that systems are secure unless demonstrated insecure; start assuming that systems are insecure unless designed securely.

by Bruce Schneier on Wired.com

28 Jul 08

Veteran developer ditches Microsoft for open source

If you’ve ever used Microsoft Access or Excel, you have likely used a product that Mike Gunderloy had a hand in developing. The irony is that Gunderloy himself doesn’t use those products anymore. He’s given up Microsoft for open source — and he’s not going back.

Gunderloy, an Evansville, Ind.-based freelance developer for the past quarter century, goes way back with Microsoft. “I was never a full-time employee, but have several times been a contractor with a badge and [Redmond] campus access,” he says.

His contracting work — on the order of half a million dollars, Gunderloy estimates — led to a substantial amount of code contributed to the Access and Excel versions of Microsoft Office 97 and 2000. He’s also worked on other, more obscure parts of the Microsoft software empire, including SQL Server, C#, and ASP.Net.

It was good work that paid well. But over the last several years, changes crept in that began to bother Gunderloy. “I saw Office 2007 really, really early — alpha code. I gave feedback on parts of the code I was less than satisfied with. It was pretty clear my feedback and that of others was pretty much ignored. That was different from [my experiences with] Office 97, 2000, and 2003. It seems the Office team felt they didn’t need any outside opinions,” Gunderloy recalls.

But those annoyances were merely a precursor to what was to come. The beginning of the end for the developer was when Microsoft went patent berserk. “What finally pushed me over the edge to ‘I’m getting out’ was when Microsoft started to assert intellectual property rights over its Ribbon interface, making that level of sweeping intellectual property claims. Microsoft went from not patenting much to patenting everything,” Gunderloy says.

Microsoft essentially tried to patent the new Ribbon interface that appeared on Office 2007 products. The Ribbon is a series of controls for various functions of Office programs.

As Gunderloy puts it: “Microsoft basically told any control vendor that wanted to make a control that the Ribbon was Microsoft property and they had to license it from Microsoft. They had to acknowledge that Microsoft owns that piece of the user interface. I said to myself, that’s nuts. You may have copyright rights in code, but the arrangement of controls in the user interface is not something that’s intellectual property.”

If that happened, Gunderloy reasoned, it could become impossible for a developer to write any code that didn’t tread on some vendor’s patent somewhere. “It was the sweeping land grab by Microsoft that pissed me off.”

Add to that Microsoft’s infamous May 2007 claim that Linux and other open source software infringed on 235 Microsoft patents, and Gunderloy had seen enough. He broke with Microsoft and started looking around for new languages to learn. He knew he wanted to keep in the Web development realm, so he checked out open source languages like the Python-based Django and Ruby on Rails. He settled on RoR because he saw more opportunity to get paid to develop on that platform.

Gunderloy’s disgust at Microsoft spilled into areas beyond the development platform; his work environment, he says, is now “100% Microsoft-free.” He bought a Mac, which he says is more reliable than his Windows boxes. He runs both OpenOffice.org and NeoOffice, and uses iWork a lot.

Gunderloy has been using RoR since late 2006. He says the biggest difference between ASP.Net and RoR is that “now, I’m a whole lot closer to the code. With the Microsoft tool chain, it was about the IDE (integrated development environment), and visual drag-and-drop. I’ve gone back to the way I used to develop 10 years ago, with text editors. Now I’m just writing code instead of moving stuff around. It’s easier to look at the code and know what’s going on.”

The drawback, he says, is that it’s harder to develop fancy interfaces. On the whole, though, Gunderloy sees more advantage to the open source way of doing things. “Free is a pretty powerful argument. With ASP, to build a database, you had to consider what it would cost to build a fully licensed SQL Server. [RoR development is] much cheaper. The other thing is if you look at the ferment going on in Web development, an awful lot of the most visible properties are being built on Rails or Django or plain old PHP.”

This trend away from Microsoft, according to Gunderloy, is likely to continue. “I don’t think we’ve seen the high-water mark of Microsoft being replaced yet. If you look at the [open source development] numbers, Firefox, the [Python-based] Google application engine, all those things are trending away from Microsoft.”

The switch has cost Gunderloy money. “I ended up cutting my hourly rate for development. I could command a higher rate [for Microsoft-related development], as someone who’d worked with Microsoft for as long as I had. The levels of compensation in the Rails development community will not reach the highest levels in Microsoft community.”

Still, he’s not complaining, and he believes the tradeoff is more than worth it. By bucking Microsoft for open source, says Gunderloy, “I’m no longer contributing to the eventual death of programming.”

by Keith Ward on Linux.com

28 Jul 08

Picasa for Linux

Picasa is a free software download from Google. Version 2.7 is now available for Linux!

Picasa is software that helps you instantly find, edit and share all the pictures on your computer. Every time you open Picasa, it automatically locates all your pictures (even ones you forgot you had) and sorts them into visual albums organized by date with folder names you will recognize. You can drag and drop to arrange your albums and make labels to create new groups. Picasa makes sure your pictures are always organized.

Picasa also makes advanced editing simple by putting one-click fixes and powerful effects at your fingertips. And Picasa makes it a snap to share your pictures – you can email, post to your blog, and upload/download via Picasa Web Albums.

Download Picasa 2.7 for Linux

28 Jul 08

Google Gadgets for Linux – almost there

Since version 2 came out in 2005, Google Desktop for Windows has included a sidebar that users can fill with screen gadgets, but the Linux version (version 1, from June 2007) provided only indexing and search functions, with no eye candy whatsoever. This has finally changed. Google recently released Google Gadgets for Linux (GGL), which closes the gap between the operating systems. With GGL, you can run as many gadgets as you wish on your screen — or at least that’s the idea. Some flaws still need to be fixed, and not everything works 100% correctly.

GGL resembles SuperKaramba, Screenlets, gDesklets, and KDE 4’s Plasma. All produce similar results and offer similar gadgets, and the only reason to choose one over another is if it offers unique gadgets you’re particularly interested in. Some programs are compatible with each other, allowing you to run gadgets from other programs; there’s even talk that Plasma will be able to run GGL gadgets directly.

License and installation

GGL is licensed under the Apache License 2.0. It is currently in version 0.10.0 and qualifies as “development” software, so expect bugs. If you happen to find some quirky behavior or errors, you can help by posting about your issues on the Google Groups user forum. GGL developers visit this forum, so you should get an answer and get the ball rolling to fix some of the remaining bugs.

Installing GGL can be difficult. If you’re up for solving lots of dependencies by hand (by installing many packages), try getting the source code and following the building instructions to build it from scratch; however, be prepared to spend a while on it. Debian, Fedora, Mandriva, and Ubuntu users might be in luck, as the project provides specific instructions for those distributions. I am mainly an openSUSE 10.3 user, but after fruitlessly trying to get all the needed packages (I still don’t know why the build process claimed I was lacking certain libraries, which I’m sure I already had), I opted to “1-click install” an already built package provided by the openSUSE Build Service. At first I installed google-gadgets-qt (for KDE), but later I revised my decision and opted for google-gadgets-gtk (for GNOME); more on this in a moment.

Mandriva users can also get an already built package from contrib/backports. I tried that, but GGL wouldn’t connect to the server and download any gadgets. Some searching on the Internet provided the solution: even though they aren’t listed as requirements, you must have the curl and openssl packages installed. Also, be sure to have the Flash plugin for Firefox, or you won’t be able to use many gadgets that depend on it. Finally, check that your /etc/X11/xorg.conf file includes the following lines, or gadgets won’t have transparent backgrounds:

Section "Extensions"
    Option "Composite" "Enable"
EndSection

Getting started

Depending on which version you get, you must run either ggl-qt or ggl-gtk. To learn about the available options, run ggl-gtk -h; for no obvious reason, ggl-qt -h won’t produce the same output, though it seemingly recognizes the same options:

Google Gadgets for Linux 0.9.3
Usage: ggl-gtk [Options] [Gadgets]
Options:
  -z zoom     Specify initial zoom factor for View, no effect for sidebar.
  -b          Draw window border for Main View.
  -ns         Use dashboard mode instead of sidebar mode.
  -bg         Run in background.
  -h, --help  Print this message and exit.
Gadgets:
  Can specify one or more Desktop Gadget paths. If any gadgets are
  specified, they will be installed by using GadgetManager.

I tried using the -z option, but couldn’t guess what value it was expecting; for example, -z 10 (10%, I hoped) completely filled my screen with a sidebar. The -b option isn’t too interesting either: it causes gadgets to get window decorations, making them look plain awful. The options you will certainly want to use are -ns (so the semitransparent black sidebar won’t appear) and -bg (so GGL will run in the background). If you want GGL to run every time you boot, you must include the ggl-gtk -ns -bg command in the startup file for your distribution. For example, under KDE in openSUSE, you would include a script with that command line in the $HOME/.kde/Autostart directory.

After GGL starts, a little icon appears in the system tray. (Another problem with ggl-qt is that the icon background is colored instead of transparent. Sometimes it shows up as black, and other times as white or red, so maybe it’s an initialization problem.) Right-click on the icon to get a menu that allows you to add gadgets, show or hide all gadgets, or exit.

The first option opens a window with several categories of gadgets, a search box, and about 600 gadgets to pick from; if you wish, you can go through the whole set, page by page. It features a category of Google-produced gadgets, as well as separate categories including News, Sports, Lifestyle (a catch-all term that encompasses all sorts of things, from horoscopes to history data to health advice), Tools, Technology, Communications, Finance, Fun and Games, and Holidays. Each gadget shows a button below; click on it, and the gadget is added to your collection.

GTK gadgets appear on all desktops; ggl-qt gadgets show up on only one desktop. If you get tired of them, click on the systray icon to quickly hide them all; another click makes them reappear. You can resize gadgets, but you have to look for the (invisible) resizing handle at the base, to the right of the gadget. Right-click on an icon to get a menu, including a Zoom option that offers predefined sizes from 50% to 200%, and an Autofit option that’s more suitable if you opt to show the black sidebar. Some gadgets also sport an Options menu entry; for example, the Weather Globe, which displays the weather at any place around the world, lets you pick a country and city. Finally, moving your mouse over a gadget reveals a button that allows you to close the gadget.

Many gadgets are not written by Google, and some may not be up to the latest standards or level of testing. Some gadgets fail to even load; for example, the Digital Retro Clock sometimes loads and sometimes doesn’t. Others produce wrong results (the Battery Meter always shows 0% charge left, even on a connected machine), and some are even worse (Spider makes my X Window System session crash).

Conclusion

GGL is an attractive package with several hundred available gadgets, but it should still be considered an alpha or beta release. The gadgets that are usable might make it worth installing, but be ready to be disappointed, because they don’t all install or run correctly. My experiences with two distributions (openSUSE and Mandriva) showed different results; not all things worked on both of them, and gadgets sometimes failed on one or the other.

by Federico Kereki on Linux.com


