WordPress founder talks traffic, new features to Web 2.0 crowd

You have to hand it to WordPress founder Matt Mullenweg. At his talk at today’s Web 2.0 Expo in San Francisco, he managed to be the first conference speaker to put up a picture of a LOLcat while actually tying it into what his company is all about.

The LOLcat in question came from icanhascheezburger, a hugely popular site that rakes in a whopping 1 million unique page views a day. It also runs on WordPress.com, Mullenweg and company’s hosted blogging platform.

While the talk was classified as a “high-order bit,” which usually involves some subtle advertising, Mullenweg used his time to talk about how much the site has grown over the last few years, as well as a downright useful feature that will be available to blog owners next week.

The new feature, called “possibly related,” scans every post you’ve written and gives your readers a list of your other posts that might be of interest, along with links to other WordPress.com blogs that line up with the keywords or context.

If this sounds familiar, it is. The technology comes from Sphere, which WordPress has partnered with. Mullenweg said it should give the 99.997 percent of WordPress.com blogs getting fewer than 10,000 page views a little love from being part of the network.

The new feature is also the company’s attempt to address the problem visitors face when they land on a permalinked page from somewhere else, which often leaves them at the whim of the blog creator and his or her linking abilities. Mullenweg described it as a situation that usually has people leaving the page and not coming back. The company will also track the click data and may make it available for other upcoming WordPress features.

“Possibly related” will roll out to WordPress.com users next week, along with a plug-in for WordPress.org users hosting their own blogs. The service is opt-in, meaning you won’t get listed in other people’s possibly-related links unless you’ve installed it on your own blog. Mullenweg noted this was not only for privacy, but also to give people an incentive to add it to their blogs and get the reciprocal traffic.

Speaking of traffic, another takeaway from Mullenweg’s talk was the growth in usage over the past few years. WordPress.com had just 2 million unique users in early 2006; that number has since climbed to 168 million this year, a staggering 54 million of whom come from the U.S. alone.

Part of the reason for the growth has been some mainstream blogs using WordPress.com, including Flickr’s company blog, The FAIL Blog, and the aforementioned icanhascheezburger.

Mullenweg’s “one last thing” was to show off an upcoming theme called “chameleon” that will change the color scheme and the look and feel of your site based on the photos you post. Themes, which have become a veritable commodity with their own store, have proven to be a huge success among WordPress.org users. This marks the first time a company theme has offered such a high level of automatic customization, something third-party theme makers have been making money on with their own efforts.


Belarc Advisor


Belarc Advisor is one of those tools for Windows users that you didn’t know you were missing until you started using it. It’s hard to overstate how important this program can be, as it provides a free analysis of your machine’s security weak points.

By looking at elements such as whether antivirus software and definitions are up to date, or whether all the security flaws in Windows have been patched, Belarc works quickly to inform you of what you’re missing and provides links explaining how to fix it. It uses the Center for Internet Security (CIS) benchmark to give the computer a score reflecting its overall security level, and produces a report that can be viewed in a Web browser.

Not only does it analyze software and operating system components and tell you where problems are, but in its comprehensive report it tells you what your computer’s physical components are: not just how much RAM you have, for example, but what kind of RAM and which slots are occupied. Simply put, the clear advice given on how to address each issue is invaluable.


The Grid: The Next-Gen Internet?

The Matrix may be the future of virtual reality, but researchers say the Grid is the future of collaborative problem-solving.

More than 400 scientists gathered at the Global Grid Forum this week to discuss what may be the Internet’s next evolutionary step.

Though distributed computing evokes associations with populist initiatives like SETI@home, where individuals donate their spare computing power to worthy projects, the Grid will link PCs to each other and the scientific community like never before.

The Grid will not only enable sharing of documents and MP3 files, but also connect PCs with sensors, telescopes and tidal-wave simulators.

IBM’s Brian Carpenter suggested “computing will become a utility just like any other utility.”

Carpenter said, “The Grid will open up … storage and transaction power in the same way that the Web opened up content.” And just as the Internet connects various public and private networks, Cisco Systems’ Bob Aiken said, “you’re going to have multiple grids, multiple sets of middleware that people are going to choose from to satisfy their applications.”

As conference moderator Walter Hoogland suggested, “The World Wide Web gave us a taste, but the Grid gives a vision of an ICT (Information and Communication Technology)-enabled world.”

Though the task of standardizing everything from system templates to the definitions of various resources is a mammoth one, the GGF can look to the early days of the Web for guidance. The Grid that organizers are building is a new kind of Internet, only this time with the creators having a better knowledge of where the bottlenecks and teething problems will be.

The general consensus at the event was that although technical issues abound, the thorniest issues will involve social and political dimensions, for example how to facilitate sharing between strangers where there is no history of trust.

Amsterdam seemed a logical choice for the first Global Grid Forum because not only is it the world’s most densely cabled city, it was also home to the Internet Engineering Task Force’s first international gathering in 1993. The IETF has served as a model for many of the GGF’s activities: protocols, policy issues, and exchanging experiences.

The Grid Forum, a U.S.-based organization, combined with eGrid (the European Grid Forum) and Asian counterparts to create the Global Grid Forum (GGF) in November 2000.

The Global Grid Forum organizers said grid communities in the United States and Europe will now run in synch.

The Grid evolved from the early desire to connect supercomputers into “metacomputers” that could be remotely controlled. The word “grid” was borrowed from the electricity grid, to imply that any compatible device could be plugged in anywhere on the Grid and be guaranteed a certain level of resources, regardless of where those resources might come from.

Scientific communities at the conference discussed what the compatibility standards should be, and how extensive the protocols need to be.

As the number of connected devices runs from the thousands into the millions, the policy issues become exponentially more complex. So far, only draft consensus has been reached on most topics, but participants say these are the early days.

As with the Web, the initial impetus for a grid came from the scientific community, specifically high-energy physics, which needed extra resources to manage and analyze the huge amounts of data being collected.

The most nettlesome issues for industry are security and accounting. But unlike the Web, which had security measures tacked on as an afterthought, the Grid is being designed from the ground up as a secure system.

Conference participants debated what types of services (known in distributed computing circles as resource units) provided through the Grid will be charged for. And how will the administrative authority be centralized?

Corporations have been slow to cotton to this new technology’s potential, but the suits are in evidence at this year’s Grid event. As GGF chairman Charlie Catlett noted, “This is the first time I’ve seen this many ties at a Grid forum.”

In addition to IBM, firms such as Boeing, Philips and Unilever are already taking baby steps toward the Grid.

Though commercial needs tend to be more transaction-focused than those of scientific pursuits, most of the technical requirements are common. Furthermore, both science and industry participants say they require a level of reliability that’s not offered by current peer-to-peer initiatives: Downloading from Napster, for example, can take seconds or minutes, or might not work at all.

Garnering commercial interest is critical to the Grid’s future. Cisco’s Aiken explained that “if grids are really going to take off and become the major impetus for the next level of evolution in the Internet, we have to have something that allows (them) to easily transfer to industry.”

Other potential Grid applications include a virtual observatory and doctors running simulations of blood flow. While some of these applications have existed for years, the Grid will make them routine rather than exceptional.

The California Institute of Technology’s Paul Messina said that by sharing computing resources, “you get more science from the same investment.”

Ian Foster of the University of Chicago said that Web precursor Arpanet was initially intended to be a distributed computing network that would share CPU-intensive tasks but instead wound up giving birth to e-mail and FTP.

The Grid may give birth to a global file-swapping network or a members-only citadel for moneyed institutions. But just as no one ten years ago would have conceived of Napster — not to mention AmIHotOrNot.com — the future of the Grid is unknown.

An associated DataGrid conference continues until Friday, focusing on a project in which resources from Pan-European research institutions will analyze data generated by a new particle collider being built at Swiss particle-physics lab CERN.


All Stocks @ gr8 prices

Posting again after a long time.

The market has hit everyone’s sentiment, but I personally feel the worst is over for the markets, and all the stocks I was talking about have fallen in line with the market.

I think this is the right time to pick them up. I still like them, not just because I recommended them or because I’m holding them, but because the story still remains and so do the targets.

So go out and buy them now


wildcard dns

Although this article is mainly for my personal use, I think many people may have thought about the same thing and wondered how to do it, so I thought I’d put it here.

What follows is what I consider to be best practice for my personal sites and a guide for those who wish to do the same. Months ago I dropped the www. prefix from my domain, in part because I think it’s redundant and also because I wanted to experiment with how Google treated valid HTTP redirect codes. The experiment has been a great success: Google seems to fully respect 301 Permanent Redirects, and my previously split PageRank has been combined; I am now at 7. There are other factors that have contributed to this, of course, and people still continue to link to my site and posts with a www. (or worse) in front, but overall it just feels so much cleaner to have one URI for one resource, all the time. I’m sure that’s the wrong way to say that, but the feeling is there nonetheless.
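If you’re curious what a crawler actually sees, you can inspect the redirect headers yourself. A quick check with curl (the bare homepage here is just an example URL) should return something like:

$ curl -I http://www.photomatt.net/
HTTP/1.1 301 Moved Permanently
Location: http://photomatt.net/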

Now for the meat. What’s a good way to do this? Let’s look at our goals:

* No links should break.
* Visitors should be redirected using a permanent redirect, HTTP code 301, meaning the address bar should update and intelligent user agents may update a stored URI.
* It should be transparent to the user.
* It should also work for mistyped subdomains such as ww. or wwww. (I still get hits from Carrie’s bad link.)

So we need a little magic in DNS and in our web server; in my case these are Bind and Apache. I’m writing about this because at some point the code I put in to catch any subdomain stopped working, and while reimplementing it I decided to write up what I was doing. This method also works with virtual hosts on shared IPs, where my previous method did not.

In Bind you need to set up a wildcard entry to catch anything that a misguided user or bad typist might enter in front of your domain name. Just as in searching or regular expressions, where an asterisk (or splat) matches any number of any characters, the same applies in Bind. So at the end of my zone DB file (/var/named/photomatt.net.db) I added the following line:

*.photomatt.net. 14400 IN A 64.246.62.114

Note the period after my domain. The IP is my shared IP address. That’s all you need; now restart Bind. (For me, /etc/init.d/named restart.)
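To double-check that the wildcard is answering before touching Apache, you can query a nonsense hostname; any label in front of the domain (wwww. here is just an example) should come back with the shared IP:

$ dig +short wwww.photomatt.net
64.246.62.114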

Now you need to set up Apache to respond to requests on any hostname under photomatt.net. Before, I relied on the convenience of this site having a dedicated IP and simply put the redirect VirtualHost entry first in my httpd.conf file. That works, but I have a better solution now. We want to tell Apache to respond to any request on any subdomain (one that does not already have its own subdomain entry) and redirect it to photomatt.net. Here’s what I have:

<VirtualHost 64.246.62.114>
DocumentRoot /home/photomat/public_html
BytesLog domlogs/photomatt.net-bytes_log
User photomat
Group photomat
ServerAlias *.photomatt.net
ServerName www.photomatt.net
CustomLog domlogs/photomatt.net combined
RedirectMatch 301 (.*) http://photomatt.net$1
</VirtualHost>

The two magic lines are the ServerAlias directive, which is self-explanatory, and the RedirectMatch line, which redirects all requests to photomatt.net in a permanent manner.

There is a catch, though. The redirecting VirtualHost entry must come after any valid subdomain VirtualHost entries you may have. For example, I have one for cvs.photomatt.net, and I had to move that entry up in httpd.conf, because Apache simply works down that file and uses the first entry it comes to that matches; the wildcard should therefore be last.
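So the relevant portion of httpd.conf ends up ordered roughly like this (a trimmed sketch; the real directives for the cvs subdomain are omitted):

<VirtualHost 64.246.62.114>
ServerName cvs.photomatt.net
# ... the cvs subdomain’s own configuration ...
</VirtualHost>

# The wildcard catch-all comes last, so it only matches requests no entry above claimed.
<VirtualHost 64.246.62.114>
ServerName www.photomatt.net
ServerAlias *.photomatt.net
RedirectMatch 301 (.*) http://photomatt.net$1
</VirtualHost>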


Facebook IM service will debut soon

Facebook plans to launch an instant-messaging application for members to embed on their profiles as early as next week, TechCrunch reported Friday.

Details are sketchy, but it appears that this will be a Web-based IM service that would allow Facebook users to chat with other people on their friends lists without needing to go through a third-party program. Additionally, TechCrunch’s Michael Arrington detailed, the service would likely be based on the Jabber open-source platform, which would mean that third-party “universal IM” clients like Pidgin, Trillian, and Adium would be able to implement it.

Facebook representatives were not immediately available for comment.

It goes without saying that instant messaging is a logical step for a social network–it’s an activity in which millions of Web users partake, and it would keep those coveted “user engagement” rates high. Facebook’s obviously not the first one to have this idea: A number of third-party Facebook Platform applications facilitate instant messaging between Facebook users, and Arrington notes that those third-party programs would effectively be killed if Facebook launched an in-house rival.

That said, other major social networks already have some kind of in-house instant-messaging function: MySpace operates MySpaceIM, for example, and AOL’s recent acquisition of Bebo will integrate that social network closely with its AIM client. If anything, it’s surprising that Facebook didn’t build something like this months ago.
