The Grid: The Next-Gen Internet?

The Matrix may be the future of virtual reality, but researchers say the Grid is the future of collaborative problem-solving.

More than 400 scientists gathered at the Global Grid Forum this week to discuss what may be the Internet’s next evolutionary step.

Though distributed computing evokes associations with populist initiatives like SETI@home, where individuals donate their spare computing power to worthy projects, the Grid will link PCs to each other and the scientific community like never before.

The Grid will not only enable sharing of documents and MP3 files, but also connect PCs with sensors, telescopes and tidal-wave simulators.

IBM’s Brian Carpenter suggested “computing will become a utility just like any other utility.”

Carpenter said, “The Grid will open up … storage and transaction power in the same way that the Web opened up content.” And just as the Internet connects various public and private networks, Cisco Systems’ Bob Aiken said, “you’re going to have multiple grids, multiple sets of middleware that people are going to choose from to satisfy their applications.”

As conference moderator Walter Hoogland suggested, “The World Wide Web gave us a taste, but the Grid gives a vision of an ICT (Information and Communication Technology)-enabled world.”

Though the task of standardizing everything from system templates to the definitions of various resources is a mammoth one, the GGF can look to the early days of the Web for guidance. The Grid that organizers are building is a new kind of Internet, only this time its creators have a better idea of where the bottlenecks and teething problems will appear.

The general consensus at the event was that although technical issues abound, the thorniest issues will involve social and political dimensions: for example, how to facilitate sharing between strangers where there is no history of trust.

Amsterdam seemed a logical choice for the first Global Grid Forum because not only is it the world’s most densely cabled city, it was also home to the Internet Engineering Task Force’s first international gathering in 1993. The IETF has served as a model for many of the GGF’s activities: protocols, policy issues, and exchanging experiences.

The Grid Forum, a U.S.-based organization, merged with eGrid (the European Grid Forum) and Asian counterparts to create the Global Grid Forum (GGF) in November 2000.

The Global Grid Forum organizers said grid communities in the United States and Europe will now run in sync.

The Grid evolved from the early desire to connect supercomputers into “metacomputers” that could be remotely controlled. The word “grid” was borrowed from the electricity grid, to imply that any compatible device could be plugged in anywhere on the Grid and be guaranteed a certain level of resources, regardless of where those resources might come from.

Scientific communities at the conference discussed what the compatibility standards should be, and how extensive the protocols need to be.

As the number of connected devices runs from the thousands into the millions, the policy issues become exponentially more complex. So far, only draft consensus has been reached on most topics, but participants say these are the early days.

As with the Web, the initial impetus for a grid came from the scientific community, specifically high-energy physics, which needed extra resources to manage and analyze the huge amounts of data being collected.

The most nettlesome issues for industry are security and accounting. But unlike the Web, which had security measures tacked on as an afterthought, the Grid is being designed from the ground up as a secure system.

Conference participants debated which types of services (known in distributed-computing circles as resource units) provided through the Grid will be charged for, and how administrative authority will be centralized.

Corporations have been slow to cotton to this new technology’s potential, but the suits are in evidence at this year’s Grid event. As GGF chairman Charlie Catlett noted, “This is the first time I’ve seen this many ties at a Grid forum.”

In addition to IBM, firms such as Boeing, Philips and Unilever are already taking baby steps toward the Grid.

Though commercial needs tend to be more transaction-focused than those of scientific pursuits, most of the technical requirements are common. Furthermore, both science and industry participants say they require a level of reliability that’s not offered by current peer-to-peer initiatives: Downloading from Napster, for example, can take seconds or minutes, or might not work at all.

Garnering commercial interest is critical to the Grid’s future. Cisco’s Aiken explained that “if grids are really going to take off and become the major impetus for the next level of evolution in the Internet, we have to have something that allows (them) to easily transfer to industry.”

Other potential Grid applications include a virtual observatory and blood-flow simulations for doctors. While some of these applications have existed for years, the Grid will make them routine rather than exceptional.

The California Institute of Technology’s Paul Messina said that by sharing computing resources, “you get more science from the same investment.”

Ian Foster of the University of Chicago said that Web precursor Arpanet was initially intended to be a distributed computing network that would share CPU-intensive tasks but instead wound up giving birth to e-mail and FTP.

The Grid may give birth to a global file-swapping network or a members-only citadel for moneyed institutions. But just as no one ten years ago would have conceived of Napster — not to mention AmIHotOrNot.com — the future of the Grid is unknown.

An associated DataGrid conference continues until Friday, focusing on a project in which resources from pan-European research institutions will analyze data generated by a new particle collider being built at the Swiss particle-physics lab CERN.


All Stocks @ gr8 prices

Posting this again after a long time.

The market has hit everyone’s sentiment, but I personally feel the worst is over, and all the stocks I was talking about have fallen in line with the broader market.

I think this is the right time to pick them up. I still like them, not just because I recommended them or because I’m holding them, but because the story remains intact and so do the targets.

So go out and buy them now.


Wildcard DNS

Although this article is mainly for my personal use, I think many people may have wondered about the same thing and how to do it, so I thought I’d put it here.

What follows is what I consider to be best practice for my personal sites and a guide for those who wish to do the same. Months ago I dropped the www. prefix from my domain, in part because I think it’s redundant and also because I wanted to experiment with how Google treated valid HTTP redirect codes. The experiment has been a great success. Google seems to fully respect 301 Permanent Redirects: the change combined my previously split PageRank, and now I am at 7. There are other factors that have contributed to this, of course, and people still continue to link to my site and posts with a www. (or worse) in front, but overall it just feels so much cleaner to have one URI for one resource, all the time. I’m sure that’s the wrong way to say that, but the feeling is there nonetheless.

Now for the meat. What’s a good way to do this? Let’s look at our goals:

* No links should break.
* Visitors should be redirected using a permanent redirect, HTTP code 301, meaning that the address bar should update and intelligent user agents may change a stored URI.
* It should be transparent to the user.
* It should also work for mistyped “sub domains” such as ww. or wwww. (I still get hits from Carrie’s bad link.)

So we need a little magic in DNS and in our web server; in my case these are Bind and Apache. I am writing about this because at some point the code I put in to catch any subdomain stopped working, and while reimplementing it I decided to write down what I was doing. This method also works with virtual hosts on shared IPs, where my previous method did not.

In Bind you need to set up a wildcard entry to catch anything that a misguided user or bad typist might enter in front of your domain name. Just as in searching or regular expressions, an asterisk (or “splat”) matches any number of any characters, and the same applies in Bind. So at the end of my zone DB file (/var/named/photomatt.net.db) I added the following line:

*.photomatt.net. 14400 IN A 64.246.62.114

Note the period after my domain. The IP is my shared IP address. That’s all you need; now restart Bind (for me, /etc/init.d/named restart).
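
Once Bind is back up, a quick sanity check with dig should return the shared IP for any made-up hostname under the domain (the wwww. below is just an example):

dig +short wwww.photomatt.net

If the wildcard took, this prints 64.246.62.114 no matter what you type in front of the domain.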

Now you need to set up Apache to respond to requests on any hostname under photomatt.net. Before, I just used the convenience of having a dedicated IP for this site and having the redirect VirtualHost entry occur first in my httpd.conf file. That works, but I have a better solution now. So we want to tell Apache to respond to any request on any subdomain (that does not already have an existing subdomain entry) and redirect it to photomatt.net. Here’s what I have:


<VirtualHost 64.246.62.114>
DocumentRoot /home/photomat/public_html
BytesLog domlogs/photomatt.net-bytes_log
User photomat
Group photomat
ServerAlias *.photomatt.net
ServerName www.photomatt.net
CustomLog domlogs/photomatt.net combined
RedirectMatch 301 (.*) http://photomatt.net$1
</VirtualHost>

The two magic lines are the ServerAlias directive, which is self-explanatory, and the RedirectMatch line, which redirects all requests to photomatt.net in a permanent manner.
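
To confirm the redirect behaves as advertised, a HEAD request with curl against a mistyped subdomain (the path here is only an example) should show the 301 status and the rewritten Location header:

curl -sI http://wwww.photomatt.net/archives/example

HTTP/1.1 301 Moved Permanently
Location: http://photomatt.net/archives/example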

There is a catch, though. The redirecting VirtualHost entry must come after any valid subdomain VirtualHost entries you may have. For example, I have one for cvs.photomatt.net, and I had to move that entry up in httpd.conf, because Apache simply works down the file and uses the first matching entry it comes to, so the wildcard should be last.
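
As a rough sketch of that ordering (the cvs entry is abbreviated here, and its DocumentRoot is an assumption for illustration), httpd.conf ends up shaped like this:

# Specific subdomains first, so they match before the wildcard
<VirtualHost 64.246.62.114>
ServerName cvs.photomatt.net
DocumentRoot /home/photomat/cvs
</VirtualHost>

# Catch-all last: any other hostname under photomatt.net
# gets permanently redirected to the bare domain
<VirtualHost 64.246.62.114>
ServerName www.photomatt.net
ServerAlias *.photomatt.net
RedirectMatch 301 (.*) http://photomatt.net$1
</VirtualHost>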


Facebook IM service will debut soon

Facebook plans to launch an instant-messaging application for members to embed on their profiles as early as next week, TechCrunch reported Friday.

Details are sketchy, but it appears that this will be a Web-based IM service that would allow Facebook users to chat with other people on their friends lists without needing to go through a third-party program. Additionally, TechCrunch’s Michael Arrington detailed, the service would likely be based on the Jabber open-source platform, which would mean that third-party “universal IM” clients like Pidgin, Trillian, and Adium would be able to implement it.

Facebook representatives were not immediately available for comment.

It goes without saying that instant messaging is a logical step for a social network–it’s an activity in which millions of Web users partake, and it would keep those coveted “user engagement” rates high. Facebook’s obviously not the first one to have this idea: A number of third-party Facebook Platform applications facilitate instant messaging between Facebook users, and Arrington notes that those developer programs would be effectively killed if Facebook launched an in-house rival.

That said, other major social networks already have in-house instant-messaging functions: MySpace operates MySpaceIM, for example, and AOL, after its recent acquisition of Bebo, will integrate that social network closely with its AIM client. If anything, it’s surprising that Facebook didn’t build something like this months ago.


Is The Planned Blackberry Blackout On Sunday "Routine Maintenance"?

Blackberry subscribers on some Indian wireless operators, such as Airtel and Vodafone, received notifications today from their providers’ corporate services departments informing them that Blackberry services would be unavailable on Sunday morning between 0730h and 1130h.

While RIM declined to comment, some of the operators we contacted said it was “routine maintenance.”

Vodafone’s technical helpdesk said it was an annual maintenance operation, originally scheduled for January but shifted to March. A representative said that RIM chose the time after conducting a survey and determining the period of lowest email traffic.

A spokesperson for Airtel said that RIM had informed them it was a routine outage, involving other parts of Southeast Asia apart from India. She added that Airtel did not expect the actual outage period to last more than an hour.

A senior tech executive at Reliance Communications said that the company had been informed earlier about an outage, and that it had told its subscribers that Blackberry services would be unavailable on Sunday morning between 0730h and 1130h.

While it’s highly likely that this is indeed a normal infrastructure upgrade drill, the recent government threat to black out the service adds a twist to the tale. Telecom Secretary S. Behura said yesterday that there was no question of a ban, though some of the “solutions” being discussed, such as the deployment of “mirror servers,” are already raising eyebrows.

Moreover, today’s Business Standard reports:

DoT, however, pointed out that due to home ministry objections it had already informed all operators to stop Blackberry services by the end of December. However, responding to requests, operators were given a three-month extension, which ends in March.

The conflicting reports from operators, and RIM’s silence, suggest that there could be more to this “routine maintenance” than meets the eye.


Ubuntu tops desktop, server Linux enthusiast poll

Ubuntu is the favourite distribution of Linux for use on both desktops and servers, according to a poll of Australian open source enthusiasts.

The survey, which was conducted by Sydney-based consultancy Waugh Partners, also found that Queensland is the best state in which to study open source, and that proprietary software developers are paid less than their open source counterparts.

In a video interview conducted at linux.conf.au in Melbourne last month, Jeff and Pia Waugh of Waugh Partners revealed some initial trends from the survey’s findings.

The survey showed that Ubuntu came out on top, followed by Fedora, Red Hat Enterprise Linux and then SUSE.

Jeff Waugh said it wasn’t difficult to see why Ubuntu was so popular: “They have done a very good job of [making a product of] a very sexy, simple desktop. It comes on one CD, it’s easy to install, it comes with great hardware support and it’s easy to use.”

The survey also revealed which universities produced the most open source enthusiasts. Waugh Partners found that Queensland University of Technology came first, followed by the University of Sydney. RMIT and UNSW were under-represented.

“I was quite surprised that the University of Sydney was second, I actually went there but all of my friends — who were great open source hackers — went to the University of New South Wales,” said Jeff Waugh.

The survey also found that by taking an interest in open source software, enthusiasts quickly found paid work, and often got paid more than they would have if developing proprietary code.

“You can see a direct correlation between coming into the community and getting industry employment,” said Pia Waugh, adding that involvement in the open source community results not only in greater employment prospects, but also in the likelihood of a better position upon entering the workforce.

“Open source developers get paid more than proprietary software developers,” Jeff Waugh told ZDNet.com.au, “so that’s pretty sweet.”

The survey covered 327 participants, and was sponsored by Fujitsu, IBM and NICTA. Only seven percent — or 23 — of respondents were female.

The complete survey results, which will include respondents who develop open source software for a living, will be released in March.
