The Top Five Myths of SEO (IMHO) – Myth #2

Continuing in my short series of five big SEO myths, this one is perhaps the most controversial of the concepts I’m going to tackle.

In the first post in the series, I laid into the discredited but still apparently widespread practice of stuffing keywords into the meta tags of a web page. My research into how keywords are used by search engines also led me to take a long, hard look at the notion of Keyword Density and the idea that there is some magic optimum number that will make all the difference between search engine success and failure.

For those of you who already know what Keyword Density is and why it’s deemed so important, I might as well get this out of the way right up front: frankly, I’m just not buying it.

Quick disclaimers:

i. As with all the posts in this series, I’m writing from the perspective of a Public Relations bloke. My observations relate to how news releases and editorial copy perform in search engine terms; the same thoughts are not necessarily going to hold water when looked at from a broader web content perspective.

ii. I still have a lot to learn about all this stuff. If I get things wrong (as I inevitably will) I will add updated and corrected info in future posts.

OK. Onwards. If you want the really short version:

From what I’ve learned, Keyword Density is not entirely irrelevant, but it’s far from being the most important determinant of SEO success.

Rather than worry about achieving an optimum density percentage, people would do a lot better to focus on writing good, interesting copy.

[Note: I’m drawing heavily on the fact that I spent many years working in the knowledge management software business before moving into PR. I would never have considered myself a true KM expert, and I’m certainly not an expert in SEO – I’m a mere flack, after all – but I think I learned enough about keyword-based indexing and search techniques to be mildly dangerous. I’ve also dredged up from memory some of the old examples and thought models we used to use back in my KM days. Grateful credit to a number of my old KM buddies for seeding the dark and dusty corners of my mind with some of these still useful examples.]

Keyword Density is, according to Wikipedia’s simple definition:

…the percentage of times a keyword or phrase appears on a web page compared to the total number of words on the page.

Let’s say you’re searching for the keyword “bogus” and you come across a 100-word document that happens to include that keyword six times — that document has a density of six out of 100 for the keyword “bogus”, or:

6/100 = 0.06 – or, expressed as a percentage, a keyword density of 6%
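That arithmetic is simple enough to sketch in a few lines of Python (the function name and the tokenizing rule are mine, purely for illustration):

```python
import re

def keyword_density(text, keyword):
    """Percentage of the words in `text` that match `keyword` (case-insensitive)."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for word in words if word == keyword.lower())
    return 100.0 * hits / len(words)

# The 100-word document with six uses of "bogus" from the example above:
page = " ".join(["bogus"] * 6 + ["filler"] * 94)
print(keyword_density(page, "bogus"))  # → 6.0
```

The scores of online density calculators out there do essentially this, and not much more.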

The same document would probably have a totally different keyword density for other words, obviously. It’s all relative. This density thing is considered important by SEO experts for all kinds of purportedly good reasons. Let’s dig into it and I’ll try to explain how I think this stuff works…

Think of the way a search engine functions. A potential customer sitting in front of the search engine is trying to find information that is important to them. As a search engine developer, you want to offer up useful and meaningful results when they search. Using only the simple keywords the user provides, somehow you have to try to figure out what information would matter most to that individual right now.

This is a massively hard thing for any computer system to do. Most of us aren’t really terribly good at searching — it’s hard for us to translate the concepts and ideas we’re looking for into simple keywords.

At the other end of the search pipe, it’s almost indescribably challenging to build a computer system that can understand what all the stuff out there on the Web is about. And “aboutness” is really, really important. To a computer, the words and phrases in a document are just bits: ones and zeroes. They have no meaning; the computer doesn’t know what the document is about.

People know that a certain arrangement of words on a page, with spaces and punctuation just so, will turn a set of otherwise random characters into something that has meaning; that has aboutness.

Think of it this way: say you’ve forgotten both the name and the author of an old poem you remember learning as a child. You recall the sense of the thing, but you can’t remember how it went.

So you wander into a favourite second-hand bookstore to see if you can find a copy. Without even the poet’s name, though, you’re going to be kind of hosed.

Luckily, the ancient shopkeeper (let’s call him Mr. Ptolemy) is both exceptionally well-read and has a prodigious memory.

Trying to describe the poem to our friendly bookstore owner, you mention that it’s about the choices we all have to make in life, and the consequences we will inevitably face from those choices as we grow older.

Somehow, splendid chap that he is, Mr. Ptolemy is able to discern that you’re talking about Robert Frost’s “The Road Not Taken”.

He understood precisely what you meant and, as he recites a couple of favourite lines (“…Two roads diverged in a wood, and I– I took the one less traveled by, And that has made all the difference”), it all snaps into place. Yes! That’s exactly the poem I’m looking for!

Now try to imagine sitting in front of the Web version of Google and achieving the same result. What keywords would you have used? “Life” and “Choices” perhaps? Neither of those words appears anywhere in the poem. So where are you going to start?*

You have the sum of all human knowledge at your fingertips, but all you can do is describe what the document you want is broadly about. And all the computer can do is a kind of textual number-crunching based on word frequency, link relationships and keyword concepts.

Do you see how hard this stuff is for the people who build search engines?

Without getting deep into the kind of incredibly clever semantic search stuff my friends at TextWise do (disclosure: they’re a client), it’s really quite amazingly hard for most software systems to understand in any real way what even a simple document is about. So search engines were built around certain compromises.

Typically, in documents, web pages and things like that, there is going to be some kind of discernible relationship between the words they contain and what the document is actually about (unless, it seems, we’re looking at poetic metaphors). A document that uses the word “astrophysics” several times is likely (but far from certain) to have something to do with the general topic of astrophysics.

From this, we can infer that a whole bunch of documents and web pages with many similar words (astrophysics, astrophysicists, cosmologists, cosmology, etc.) are more likely to be about the same thing than documents with no similar words. This is useful, because it means we can start grouping stuff together into clusters of inferred aboutness.

(Homonyms tend to bugger this all up, I’m afraid. Our astrophysicist would mean something quite specific if she searched for “stars”. To a teenage celebrity gossip junkie, the same keyword means something entirely different. And a poor chap who just had difficulty spelling the word “asterisk” would be even more confused. But let’s not get too far down that path – semantic disambiguation blows my mind.)

By now, you should have already figured out how some of the earliest search engines worked.

  1. Build a really, really big index of words and pointers to where they appear in lots and lots of documents.
  2. Use the frequency of word-use as a guide to which documents are most likely to be about the topics your searcher is interested in.
  3. Layer on some synonym cleverness and you’ve got the start of a workable way to navigate through an ever-expanding online corpus of knowledge.
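Those three steps can be sketched as a toy inverted index. This is a deliberate oversimplification, and every name in it is mine, but it shows the mechanics:

```python
from collections import Counter, defaultdict

docs = {
    "doc1": "astrophysics and cosmology for beginners",
    "doc2": "celebrity stars and gossip",
    "doc3": "astrophysics astrophysics astrophysics",  # a keyword-stuffed page
}

# Step 1: build a big index mapping each word to the documents it appears in
index = defaultdict(Counter)
for doc_id, text in docs.items():
    for word in text.lower().split():
        index[word][doc_id] += 1

# Step 2: rank documents by raw keyword frequency
def search(keyword):
    return index[keyword.lower()].most_common()

print(search("astrophysics"))  # the stuffed doc3 ranks first
```

Step 3, the synonym cleverness, would fold counts for related words like “cosmology” into the same ranking. Even this toy version shows why simply repeating a word was enough to fool the earliest engines.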

It’s from this approach that the notion of keyword density rose to prominence in the SEO world.

Unscrupulous marketers in the early days of the web figured out that early, dumb search engines could be fooled. A document that included the word “astrophysics” in every second sentence might, the theory went, end up being ranked as the single most relevant and useful document about astrophysics in the entire universe. (It wasn’t really quite this unsubtle, but you get my drift).

Having worked out the importance of density, web marketing monkeys started stuffing their pages with hidden keywords. Remember that old practice of embedding white text on the white background of a page? That was a density game.

The search companies quickly caught on though, as the Wikipedia entry notes:

In the late 1990s, which was the early days of search engines, keyword density was an important factor in how a page was ranked. However, as webmasters discovered this and the implementation of optimum keyword density became widespread, it became a minor factor in the rankings. Search engines began giving priority to other factors that are beyond the direct control of webmasters. Today, the overuse of keywords, a practice called keyword stuffing, will cause a web page to be penalized.

If you do any research into this stuff at all, you’ll soon see that there’s something of a balancing act going on. On the one hand, you don’t want to get downranked as a spammer for having too many keywords stuffed into your web pages. On the other, you don’t want to run the risk of ranking too low by not including enough keywords.

There’s a two-step consulting process taking place out there:

  1. Help the client figure out the most important keywords that will attract the right audience to their web pages (e.g. people who want to buy a couch in Canada are probably searching for “chesterfield” not “settee”);
  2. Optimize all web content to hit the right proportion of keywords-to-text throughout.

The general consensus right now seems to be that maintaining a keyword density of between 2% and 3% in your web content is optimal.

Any higher than 3% and you might get marked as spam; any lower than 2% and you’re just not even on the radar. These numbers vary widely, mind: I’ve seen optimal density recommendations as high as 8% – which seems insane to me.

Think about this in PR terms for a second: to achieve 2-3% recommended density in a short, 400-word news release, you’d need to repeat the chosen keyword 8-12 times. We’ve all read news releases like that – the ones that sound like they were written by robots.

Here’s the thing, though: other than a relatively small group of real experts (the people who actually build the search engine algorithms at Google and elsewhere) no one really seems to know whether keyword density has any impact on search engine results.

In fact, I’ve been unable to find a single shred of evidence that any major search engine in use today gives preference to a particular ratio of keywords in web pages.

There are a lot of conflicting opinions out there, and I could be 100% wrong about this, but stick with me…

In all of the reading I’ve been doing on this topic, it was one particular comment from Eric Brantner at the site Reve News (geddit?) that really sparked my skepticism. In a piece titled “Keyword Density: The SEO Myth that Never Dies”, Eric writes:

The simple truth is search engines are far too advanced to be tricked by something as basic as an optimal keyword density

…and that makes a great deal of sense to me.

As an aside, I think one part of the problem is that people often completely misinterpret the idea behind those optimal density numbers. It’s easy to assume “recommended density” should be taken as a guide to add more keywords into a web page until you hit the magic ratio, and there are scores of online keyword density calculators that promise to help you figure out your sweet spot.

In fact, if keyword density measures are important at all, they’re primarily useful in helping to manage keyword overload — to ensure your content doesn’t get discounted as spam.

Optimal density is something you’re encouraged to work down to, not up towards. There’s a good article on this topic at the delightfully snarky SEOElite blog and another useful analysis on the well-known SEO Tools site.
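In that spirit, if you run a density check at all, it makes more sense as a spam ceiling than a target. A hypothetical sketch (the 3% threshold is purely illustrative, not anything a search engine has published):

```python
def looks_stuffed(text, keyword, ceiling_pct=3.0):
    """True if `keyword` is repeated above an illustrative spam ceiling."""
    words = text.lower().split()
    hits = words.count(keyword.lower())
    return 100.0 * hits / max(len(words), 1) > ceiling_pct

print(looks_stuffed("buy cheap widgets " * 10, "widgets"))  # → True
```

Used that way round, the tool tells you when to cut keywords, never when to add them.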

Getting back to the main point, though, I’ve come across a number of sources making the (entirely believable) assertion that keyword density on a single document doesn’t actually matter much at all. And here’s why: keyword density is an internal measure. It ignores the fact that no web page is an island.

In other words: assessing keyword density can only tell you something about the individual web page (and its numeric placement in a simple ranking table) – it’s a way of analyzing word frequency in a document in relation only to the document itself.

Think of a great long list of documents, arranged in order of percentage density for the keyword “street”.

– At the top of the list is a document that has a very high density, as it contains the keyword many thousands of times in a 2,000-page file (let’s say it has a density of around 8%).
– Way further down the list is a web site that mentions the word fifty times out of 35,000 words (0.14% density).
– Somewhere in the middle is a Wikipedia entry with 133 uses of the keyword out of 2,700 words (5% density).

So which of these is actually the most relevant document? The answer, of course, all depends on what you’re looking for.

That first document in the list includes the word “street” thousands of times because it’s the Yellow Pages. Probably not what you had in mind.

The web site with a keyword density of less than 1% is the hip young online magazine you’re looking for – the one that just happens to be about all things “Street”, but is way too fearsomely cool to use the word more than a handful of times in its masthead and elsewhere.

At this point, the logic of my analogy crumbles and leaks rather, but you get the point. Just because a document uses the same word lots of times (or even just enough times) does not mean it’s the most relevant and useful document for every search.

It’s like: if I stood in front of an audience for an hour and dropped the word “astrophysics” into every fifth sentence, a completely unsophisticated listener might assume that I know something about astrophysics just because I used the word a lot.

But linguistic research has shown that raw frequency is a poor proxy for relevance – and it doesn’t take any kind of research to prove that I know the square root of bugger all about astrophysics (nor about SEO, for that matter).

The best and most advanced search engine algorithms (such as those in place at Google, for example) are designed to index and “understand” the words in a document in the context of every other document in the index. The ultimate search engine, perhaps, would be one that (amongst its weaponry) had the ability to understand the true relevance of any single document when compared with every single other document in the known dataverse.

Again: the fact that a particular document happens to use a certain keyword a dozen times does not necessarily mean it is an authoritative source of info related to that keyword. Good search engines know this and have largely devalued keyword density as a ranking parameter. It’s still used, but it is not nearly as important a measure as it was way back at the dawn of the Web.

In short: frequency is not the same as relevance.
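One classic step the engines took beyond raw frequency counting is TF-IDF weighting, which discounts words that appear in nearly every document. This little sketch is emphatically not what Google runs; it’s just the textbook idea:

```python
import math

def tf_idf(term, doc, corpus):
    """Frequency of `term` in `doc`, discounted by how common it is corpus-wide."""
    words = doc.lower().split()
    tf = words.count(term.lower()) / max(len(words), 1)
    docs_containing = sum(1 for d in corpus if term.lower() in d.lower().split())
    idf = math.log(len(corpus) / max(docs_containing, 1))
    return tf * idf

corpus = [
    "the stars of astrophysics",
    "the stars of hollywood",
    "the quiet bookstore",
]
print(tf_idf("the", corpus[0], corpus))           # 0.0 -- "the" is everywhere
print(tf_idf("astrophysics", corpus[0], corpus))  # > 0 -- rare, so it carries signal
```

Notice that a word’s weight here depends on the rest of the corpus, not just the page itself – exactly the “no web page is an island” point above.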

SEO efforts that focus too slavishly on achieving the optimum keyword density run the risk of creating dry, robotic copy that’s a nightmare for human visitors to read, and may even be down-ranked by sophisticated search engines.

Perhaps I’m being naive here, but I can’t help thinking that the goal of the search engines is to work the way our Mr. Ptolemy does in the bookstore example above. The search engine tries to understand what it is you’re really interested in, and offer that stuff up to you through the browser.

Google uses more than 200 different signals to try to determine the best information to offer up for any search, and they change their algorithms (by some accounts) several times a week. In the midst of all this high-power computing, what they’re trying to do is mimic a really good human guide. They do this by looking for the cues to what other people deem to be the most valuable, relevant, useful and interesting content on any topic – using all kinds of different “signals”.

With all that sophistication going on, I can’t help but think that such a simplistic notion as “keyword density” is a real red herring. Good content, well written, is as important today as it has always been. Write something useful, meaningful, intelligent, newsworthy or just genuinely interesting (or all of these), and the search engines will find you.

Before I shut up about this, a final thought on keywords. I’ve laid into them pretty hard in the first couple of posts here, and I don’t want anyone getting the wrong idea. While I’m just not ready to go along with the magic “optimal keyword density” malarkey, I’m still a firm believer in the importance and value of using the right keywords for the audience you hope to attract.

Keywords are, after all, the simple inputs we use to search – so it’s important to research and understand the words, phrases, synonyms and circuitous routes that bring people to your site. Studying your site analytics can be great for this.

In the last 24 hours, I know that people have come to my blog through searching for me by name (with all kinds of creative misspellings) or by searching for such diverse things as:

social media experts
future of branding
twitter policy
the machine stops
i hate vista

(I’m still the #1 ranked site in Canada for this last example, btw – and do you think Microsoft has ever reached out to me in any way?)

Studying the keywords people use to find you can teach you a lot. They’re still the key drivers of search and any professional communicator will want to be sure they’re using the same kind of vocabulary as the potential audience they’re seeking to engage. Again, there are a lot of online tools you can use to experiment with keywords. Go Google.

Just don’t get too hung up on any spurious notions of optimal keyword density, OK?

*[In case you’re wondering, if you Google “poem about life choices”, without the quotation marks, one of the top five results just happens to be a link to Robert Frost’s poem. Darn it. This doesn’t mean that any part of my argument is necessarily invalid, though. It simply proves that I’m not very good at coming up with illustrative examples for some of my points.]

Back to Myth #1: The Importance of Keyword Meta Tags
Next up – Myth #3: On-page optimization is the thing

The Top Five Myths of SEO (IMHO) – Intro and Myth #1

This is the first in a short series of posts exploring what I believe are some of the top myths in Search Engine Optimization. I was going to throw all five myths into a single post, but then I realised that would make for an even more than usually lengthy piece, so I’ve split the whole thing up into (slightly) shorter chunks.

I’ve been doing a great deal of reading about Search Engine Optimization (SEO) in the last few months, partly out of general professional interest, and partly in order to better understand certain aspects for some of our client work.

There’s a necessary and logical connection between Public Relations and SEO. Search engines like news – frequently updated, fresh content. This is the rationale behind Google News and the Yahoo! home page looking a lot like an online newspaper. As a flack, I’m kind of in the business of news and, more particularly, in the business of helping clients to get their news in front of as many of the right people as possible.

This is a deliberate over-simplification, but one of the primary tools we use in PR to convey a client’s story is, of course, the news release. It’s been said before that in the old days, 80 to 90 per cent of the expected audience for a news release was made up of members of the media. With the disintermediating effect of the Internet, the thinning out of the media, and the growth of online audiences, as much as 50 per cent of the audience for any news release now comes directly to the release through search. It’s direct-to-consumer PR, in other words.

The main news wire services have seen this in the growth of direct traffic to their websites. News feeds that once ran directly into the specialised editorial systems in traditional news rooms, available only to journalists, stock traders and a select few others, are now widely available online for anyone to see just by visiting CNW Group, Marketwire, Businesswire or one of the newer, online-only distribution services. [Disclosure: I should probably mention, just in case, that CNW Group continues to be a valued and valuable client].

With news going direct to consumers, and directly into the indexes of the main search engines, it makes sense that the issuing organizations should pay attention to the way those search engines handle their news. If you think of yourself as one of the leading sources on a particular subject, you want to make sure your sage pronouncements and carefully-crafted messages are showing up high and bright in Google searches to do with that subject.

Our opinions today are formed and shaped by what we learn online. The vast majority of product purchase decisions are supported by online research, as are investment decisions and service choices. In this research-driven market, it’s increasingly important to rank at or near the top of search results. I’ve seen comments suggesting that if you are on the second or third page of results you might only get one per cent of the search traffic that the top ranked site gets – and I can well believe that.

Hence, there is a natural relationship between the practice of Search Engine Optimization and the business of PR. Really good PR is, I think, a form of story-telling. Good SEO, it seems, is the practice of ensuring those stories reach the right ears (or eyes).

After months of online and offline research, soaking up as much information as I’ve been able to handle in spare hours, I still feel I’ve only just scratched the surface of this weird and nebulous topic. It’s a moving target, that much is clear. As the major search engines continue to refine their algorithms to produce ever better results, the paid optimization consultants flex and respond in efforts to keep their client content as close to the top of the search results as possible.

I’m looking forward to the upcoming Search Engine Strategies conference, coming to Toronto in early June – hoping to learn a lot more from some of the most active participants in the field, including the luminously intelligent Andrew Goodman of Page Zero Media and a host of other interesting speakers and search technology experts.

One thing I’m keen to test is a personal theory I’ve arrived at through research and analysis over the past couple of months. I’m hoping to engage some of the speakers and attendees at the conference to see if what I’ve come to understand about the current state of SEO is true. In particular, I’ve synthesized a set of what I believe are giant myths about the way SEO works – ill-founded claims that still keep popping up all over the place but, from what I’ve learned, can’t possibly be valid – even if they once were.

Obvious, up-front caveat: just in case it’s not clear enough already, I’m really not an expert in this stuff. It’s entirely possible I could be talking out of my ningnong here, but this stuff seems to make sense with what I’ve been able to learn and test in the last couple of months.

MYTH #1: The importance of keyword meta tags

I’m going to start with something that should be really basic, 101 level stuff to many of you – but it’s startling how many people who seem interested in SEO don’t know about this.

If you look at the source code of just about any web page, you’ll see a whole bunch of special code elements called the “meta tags”. I’m not going to go into detail about them here; you can learn a ton of information about meta tags on some much better sites than this one, if you’re interested.

Suffice to say, the meta tags are, as the name suggests, a kind of special metadata that can be used to describe the content and structure of the page. (Strictly speaking, the text that appears in your browser’s title bar comes from the page’s Title element, which sits alongside the meta tags in the page’s head section rather than being a meta tag itself.) There are meta tags for Description, Language, and so on.

One of these meta elements, the “Keywords” tag, is a relic of the early architecture of the World Wide Web, from way back in the pre-Google days. The first search engines (WebCrawler, Magellan, AltaVista, Lycos, and others) looked for this hidden tag as a key set of clues to the topic of your website. Webmasters were supposed to use the Keywords meta tag to list some of the main subject keywords describing the content of the page – like a library index card describing what the page was about.
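For the curious, the tag itself is just a line in the page’s head section, something like `<meta name="keywords" content="astrophysics, cosmology">` – and pulling it out of a page is trivial, which is partly why it was so easy to game. A quick sketch using nothing but Python’s standard library (the class name is mine):

```python
from html.parser import HTMLParser

class KeywordsTagReader(HTMLParser):
    """Collect the contents of any <meta name="keywords"> tags on a page."""
    def __init__(self):
        super().__init__()
        self.keywords = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "keywords":
            self.keywords += [k.strip() for k in attrs.get("content", "").split(",")]

reader = KeywordsTagReader()
reader.feed('<head><meta name="keywords" content="astrophysics, cosmology"></head>')
print(reader.keywords)  # → ['astrophysics', 'cosmology']
```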

Of course, many people quickly caught on to the idea that this could be gamed. Stuffing a competitor’s product names into your keywords was a quick and dirty way to try to steal some of their attention. Listing multiple synonyms for topics of interest to your target customers was another common form of “keyword stuffing” – trying to artificially increase the rank of your page by making it appear more relevant to a broad array of topics.

This kind of abuse became so rampant that it quickly led to the Keywords meta tag becoming completely ignored by modern search engines. Although many people still use it, and a lot of self-proclaimed SEO experts still seem to recommend it, the Keywords meta tag seems to be as vestigial as your appendix.

From what I’ve been able to glean, Yahoo! is alone among the major search engines in still giving this meta tag some (minor) weight. Google, it seems, has never put any value on the information in this tag. In discussing this with others, I had a couple of people question whether there was any evidence to this effect, so I went hunting.

It’s hard to find any concrete word from Google on this subject, but here’s something useful. In the comments of this post on the Official Google Webmaster Central blog, you’ll find the blog author, Google employee John Mueller, says:

…we generally ignore the contents of the “keywords” meta tag. As with other possible meta tags, feel free to place it on your pages if you can use it for other purposes – it won’t count against you.

Also, the Wikipedia page about meta tags states:

With respect to Google, thirty-seven leaders in search engine optimization concluded in April 2007 that the relevance of having your keywords in the meta-attribute keywords is little to none

This is a reference, btw, to an excellent study published at SEOmoz, one of the definitive pieces on search engine ranking factors.

So you’d think by now the word would be out and people would have stopped going on about the Keywords meta tag. And yet I have direct experience of “experts” who are actively charging clients for stuffing words into this part of their web page source code, claiming that it will help improve their ranking in the search engines.

It won’t. Try this yourself: run a Google search for “keywords meta tag” – without the quotation marks. I don’t want to link this, I want you to run the search for yourself. Now read what the first three or four articles that come up have to say about the subject.

Better? Good – now stop paying your SEO consultant for something that’s just plain useless.

In short: using the Keywords meta tag in your web pages won’t necessarily hurt your rank in search engine results, but it absolutely won’t help either.

Next Myth: The Magic Keyword Density Percentage

Now we are six

It’s the second time I’ve thought of using that title for a post, and the connection between the two thoughts exists on several levels.

The first time was when Ruairi turned six years old last December. This time: well, this time we’re not talking in ages. To my considerable surprise, and largely thanks to Ruairi’s patient influence, we seem to have acquired a sixth family member.

Say hello to Cooper – born February 21, 2009, which makes him a mere nine weeks old by my count.

Cooper’s a kind of ‘doodle mashup. Somewhere between an Australian Shepherd and a Standard Poodle with, we think, a trace of something else stirred in for good luck. He’s also 8% toe-licker, 6% rug-worrier, 4% random sneezer, and at least 82% heart-breaker.

We have a dog. Crikey. We have a dog.

If you’d asked me last week, I would still have told you, with confidence, that I was a confirmed cat person – yet here I am, falling hard for the finest bag o’ rags scruffy pup in the known universe.

OK, back up. How did we get here?

Cooper is, essentially, a promise kept. We’re all animal lovers, but of all of us, the most utterly devoted to beasts of every variety is certainly Ruairi. He’s been asking for a dog for as long as I can remember, but at least since he was three years old. We promised him long ago he could have a dog when he reached Grade One – once he was in full-time school. For the past few months, the quiet campaign has intensified, and we knew we were going to have to do it soon. Still and all, we kept finding reasons to put off the decision.

Leona and I both grew up around dogs, but it took me a long time to get my head around the idea of raising a pup. Then last weekend, as I walked down the hill into Riverdale Park for one of Charlie’s cross-country practice sessions, it all finally clicked into place. Nothing like a walk in the park to make you realise how a dog could fit into your life.

So – after months and months of research, reading, talking to friends, and observing the hundreds of neighbourhood pooches – we’ve finally done it. Yesterday afternoon we made the long trek out through one of the filthiest storms I’ve ever driven through, to the rural calm of Wallenstein, Ontario and a lovely, clean and happy Mennonite farm to take a look at their latest litter of pups.

Several of our friends scoffed at the thought that we were “just going to look” at the pups. They were all absolutely right, of course, as I guess I knew they were. We were ready. We knew we were ready and so, it would seem, did the wee beastie who rode back with us, snuggled in Leona’s arms.

The first night was pretty rough. Poor Cooper found it hard to adjust after the disorientation of his first car ride, the excitement of his strange new home, the flood of affection from his new family members, and the misery of separation from his siblings.

We’re doing the crate-training thing, which some people will tell you is cruel (often the same people who’ll angrily swat a pup on the nose when it piddles on the carpet). The books and many experts seem to agree it’s one of the best things you can do for a young dog. Try explaining that to a 9-week-old snufflehound, though. Little Coop was not a happy chap last night.

Not wanting to take him from the crate, but also unable to harden my heart entirely to his lonely whimpers, I ended up – soft idiot that I am – grabbing a sleeping bag and bedding down beside him on the hardwood. Somehow, we both survived intact and (barely) rested. At the same time, this doggie Ferberizing seems to have forged an instant bond between us, such that Cooper has hardly left my side all day.

He’s imprinted on me as deeply as I’ve fallen in love with him.

Today has been a whirl of visits from friends, romps in the garden, walkies, walkies, and more walkies. The comical little scruff has settled in beautifully, so far. Who knows what the next years will bring?

Welcome to the family, little Cooper. It’s a joy to have you here.

Oh, and in case you’re wondering, we named him Cooper for a number of reasons; one being his fuzzy-headed likeness to a certain favourite comic magician of my childhood.

[UPDATE: He slept through the second night without a peep out of him, and then did his business immediately on being taken outside at 6am. I’m inclined to think this was less a miraculous instant housebreaking epiphany and more the result of him being plain tuckered out after all yesterday’s excitement, but I’m not complaining. Good boy, Cooper!]

Mesh 09: Bootstrapping a Startup panel

This will be my fourth time speaking or moderating at mesh – Canada’s premier web conference. Four mesh events in four years, and four times they’ve invited me to participate. Seems the organizers like the cut of my “immoderate moderator” jib, which is deeply flattering and rewarding as I always have a lot of fun doing it.

This year, I’m not running a panel on any of the topics with which I’m normally associated. As the title of this post says, I’m cat-herding a group of experienced entrepreneurs as we unwrap the issues surrounding getting a startup company off the ground.

First question: why me? Well, as it happens, I have more experience in this space than might be immediately obvious. Sure, I’ve done PR and marketing consulting for a whole slew of early-stage technology companies in the past 9 years or so. I’ve helped startup clients secure a healthy chunk of “holy grail” media coverage, with a good assortment of Globe, Post, CTV, CBC, CityTV, Global and even TechCrunch, Engadget and ReadWriteWeb hits. But long before moving into the PR agency world I also had my own direct experience of bootstrapping a startup.

It’s a long story, but pretty much the whole reason I’m in Canada today goes back to 1996, when the tiny, struggling software firm my friends and I had started in the UK got bought by a much bigger Canadian systems house, who we’d just beaten in a competitive bid for some juicy NY-based business. I arrived in Toronto in the middle of February, ’96. Worked solid 18-hour days for most of that summer and closed the IPO on November 7th of the same year. Hey – it was the 90s; that’s how we rolled back then.

Point is: although the market is very different these days, I think I can go into this panel session with a reasonable idea of some of the hot topics we should be exploring. But I also need your help.

One of the best things about mesh, in my experience, is the way that the discussion extends well beyond the four walls and fixed time slot of any single session. Plus, whether I’m directly involved in a session or just sitting at the back, I love it when the conversation is lively enough to erase the divide between the experts on stage and the people formerly known as the audience.

Vigorous, even heated, debate is a lot more interesting than polite consensus from a panel of even the smartest speakers – and it gets us a lot closer to understanding the key questions if there’s a healthy cloud of discussion before, during and after the focal point of the session.

It’s also safe to assume that not all of the brightest and best minds on any topic will actually be in the room at the time of the panel chat (that’s one of the benefits of live-blogging and tweeting, of course).

So with all that in mind, I thought I’d kick off part of the discussion here and see if we can spill it over into the panel session next Wednesday afternoon. The panel for this slot is a terrific group of smart entrepreneurs: my old friend, Mic Berman (Embarkonit), Carol Leaman, CEO of the excellent PostRank, and Keith McSpurren, Founder & President of CoverItLive.

With the collected decades of experience this trio has to offer, the hard-won scar tissue of their years in the startup trenches, what are some of the questions you’d want me to fire at them? If you’re an early-stage entrepreneur yourself, or thinking the time is right (despite the soggy market) to finally turn your killer idea into your day job: what one thing would you most like to know from people who’ve been there, still doing that?

I’ve got a list of some initial questions worked up (below), but are these good enough to make our panellists earn your attention? Let me know what you think…

1. What are the two most important ingredients for startup success?
2. What is the most common mistake made by entrepreneurs when bootstrapping (and how do you avoid it)?
3. How do you mitigate the risks of a bootstrapped operation in the midst of recession?
4. Would you be utterly insane to launch a new startup right now?
5. Do you think Canada is a better or worse environment for startups than elsewhere?
6. Who do you turn to for your advice, support, and encouragement?
7. What one book should every founder read?
8. What online resource could you – as an entrepreneur – not live without?
9. Who are Canada’s startup heroes (and villains)?
10. If you’re so smart, why aren’t you rich already (or, if you are, can you lend me a tenner)?

Help us make this panel the most useful session on building a startup you’ll ever attend. What am I missing?


As with so many of my ideas, this could be utterly daft or genuinely interesting (or something in between). We’ll let the crowd decide…

Un-conferences, meetups and camps (BarCamp, DemoCamp, PodCamp, ChangeCamp, etc.) have become the pedestrian norm in geek and social media circles. The once-rebellious ideals of the un-conference set are drowning in the din of 50,000 cheerily whuffie-riddled, corporate-sponsored echo chambers. We need to break out of this rut somehow, dammit. I’m calling for a meta-session on the future of (the future of) un-unconferences.

Call it: meta|camp™*

meta|camp will be an entirely new kind of people-formerly-known-as-the-audience collaboration experiment (in contrast to all of those other mass collaboration experiments currently clogging your over-stuffed schmoozing schedule).

Perhaps a little like the original conception of Tim O’Reilly’s Foo Camp, but even more intricately and self-consciously unstructured in falling over itself to be anarchically self-assembling and self-defining.

meta|camp would adamantly not be a series or an annual event – it would, by definition, be a one-off. This does not entirely preclude the possibility of re-runs, but only once we figure out how to add subtitles on top of subtitles to the DVD copy of the proceedings. We’re looking into the possibility of OCR’ing the screencaps.

Ingredients of meta|camp would be as follows:

  • Fill a room with conference-goers, geeks, experts, savants, neophytes, dilettantes – oh, and caterers (we need carbs!);
  • Get the contributors (not “audience”) to generate the content and direct the narrative arc of the whole session, on the fly (i.e. no sign-up board or pre-baked schedule);
  • Use the MIT Media Lab’s tool to “run” the session on a big screen;
  • Contributors post questions and topics, and vote on what the session should be about and ideas worth exploring;
  • Second big screen runs live tweet stream and/or group CoverItLive session;
  • Brainstorm what’s good and what’s bad about the current conference/unconference/camp/workshop/whatever world, plus ways to make things better;
  • There would have to be at least one, preferably several, catalysts (not “moderators”, panelists, speakers, or invigilators – no. The catalysts are just there to help spark discussion, trouble-shoot the tech, make off-colour remarks about the catering. Catalysts help make things happen – they don’t directly do those things and they get out of the way);
  • Everyone wears a mic (or no one does). Everyone has access to the tools, etc.;
  • We deal with trolls by ignoring them;
  • The whole session also gets webcast live and, yes, perhaps even simulcast in Second Life (shoot me now).

What would we talk about? No agenda or set topics, except insofar as we’re there to talk about the future of (the future of) un-unconferences. So I hope we’d get into such things as:

  • Live tweeting/live blogging of conferences – useful meta information or distracting annoyance?
  • Pulling up a Twitter or IRC backchannel behind the speakers’ heads – same question.
  • Moving from monologue to true group dialogue – new ways to break the old moulds.
  • If I say the same thing as the last panelist just said, but I just frame it differently, does a tree fall in the forest?
  • Bad animations in PowerPoint – should they carry mandatory jail sentences?
  • Name badges – we don’t need no steenking badges.

Just think about the hyper-meta-lovely moebian joy of all this. To have a conference session discussing the future of conference sessions that includes a conversation on live-blogging, while some of the discussion participants are live-blogging the actual session.

Then we should have another tier of people (in the room or around the world) live-blogging the aggregated live-blog coverage. And then we feed THAT back into one of the big screens in the room and have the catalysts and contributors talk to it so that others can then comment and live-blog the commenting of the…

Of course, there’s also a very real risk we could spend the whole of meta|camp defining meta|camp (the first rule of meta|camp is…). But – don’t you see? That’s OK too. In fact, that’s almost the entire point.

I am only semi-joking about this, in case you’re wondering. Bonkers though it may seem (even to me) I really would like to stage a meta|camp at some point in the future. Even really well produced un-conferences and camps have their flaws, and I’ve whined about the conventionality (pun intended) in the past.

So let’s really do it! What the heck. Be Judy Garland to my Mickey Rooney (or… no, that’s just wrong). If we can figure out dates and a venue (and caterers, I’m all about the caterers), join me for meta|camp — where we’ll think the unthinkable, question the unquestionable, eff the ineffable and even screw the inscrutable!

[An important and necessary hat tip, btw, to the very wonderful Gary Turner who may have inadvertently seeded this idea when he meta-blogged the live-blogging of the DigitalID conference way back in 2002. You had to be there.]

*Yes, it needs to be uncapitalised. In the 2.0 world, we are all the bastard offspring of e.e. cummings. And, yes, the vertical line is part of the name. It symbolises the inextricable, inexplicable, ineluctability of the meta memetics of meta|camp and I pity the foo’ who thinks otherwise.

Analyzing my new Twitter followers

Recently, my number of new Twitter followers has been growing at an increasing rate. I’m not sure if this is simply a factor of the exponential growth of Twitter itself, or perhaps something to do with the fact that I’ve been talking about Twitter a lot more in email, phone conversations and at events. Probably some combination of these things.

Interested to see where all these new followers are coming from, I took a quick walk through the last ten days’ worth of new followers, checking out their profiles to see who these people were.

It quickly became clear that I could group all of the last few hundred followers into a set of simple categories, as this chart shows:

You probably think I’m joking. I only wish that I were (OK, well maybe a little – but I’m not exaggerating by much).

Twitter is still extraordinarily useful, vibrant, and interesting. It is, as Joe says, a great town square. But it’s clearly in danger of being overrun by… well… by all of the above. Meh.

The Machine Stops (again) #googmayharm

I’ve written about this in the past, but this morning’s short-lived global Google meltdown seems an appropriate time to repeat the thought.

For years now, I’ve been bringing up E.M. Forster’s extraordinary short story “The Machine Stops” in the context of discussions about Vannevar Bush, Ted Nelson or any conversation touching on our society’s increasing dependence on, and faith in, technology.

It seems hardly anyone has ever heard of this story. People know Forster, of course, for the obvious novels (Passage to India, Howards End, etc.) and the Merchant-Ivory movies of his work. But he was also an exceptionally gifted short story writer, on a par with O. Henry in his mastery of the concise art.

I first read The Machine Stops as part of a collection of stories that were issued as required reading in my fourth year of secondary school in England. Many years later, I found an online copy to download and re-read on my old Palm Vx. This morning’s events make me want to go back and read it once again.

In the story, Forster paints a bleak picture of a post-apocalyptic dystopia in which humanity has become so utterly dependent on technology as to be rendered completely helpless when, as the title suggests, the “Machine” that runs the world and all forms of life support simply stops working.

Forster’s Machine has grown over time to become so big and complex that no one living person or group is able to fully grok the complex workings of the thing to start fixing it.

It would be wrong to over-dramatise this morning’s very brief Google outage as anything remotely as catastrophic, of course. But for about 20 entertaining minutes there, it seemed like people worldwide had a tiny glimpse into the fearful abyss of a world without Google (and yes, my tongue’s more than a little way into my cheek).

Being deprived of our groupmind, even for such a short time, caused an extraordinary flood of messages on Twitter. The search for Twitter hashtag #googmayharm reached 100 pages of posts (about 1500 individual tweets) in under an hour and fast overtook the Super Bowl as the hottest rising story.

“As technology advances, our relative understanding decreases, and our helplessness and confusion increases,” as that Weinberger bloke once said.

Indeed. The curious thing for me is that I’m left more reassured than worried about all this.

It is precisely the inherent, defining brokenness of the Web that makes it so valuable and so useful.

When one key part (in this case Google) completely fails – however briefly – we may have a moment of panic, but we quickly learn to route around the damage. There are lots of other search engines out there still; many alternative ways to complete the synaptic connections we’ve grown accustomed to outsourcing to the great gods of Google.

We should worry less, perhaps, about what happens when a dominant provider such as Google fails, and more about what might happen if the Net ever reaches the point of working too well.

Has Google been hacked?

At some point this morning, every single Google search started bringing up linkjacked results with each result flagged like this:

Seems that every single site has now been Net Nannied into oblivion – doesn’t matter what you search for, EVERYTHING is flagged with “This site may harm your computer”.

No news out of Google as at 9:56am Eastern, nothing on the Google blog, and no response yet from the handful of people I know at Google who I’ve sent email to – but then, it is Saturday morning. Have to believe someone at the Google HQ is on this though. It seems pretty clear they’ve been hacked in some way – and it’s a hack on a huge scale.

Meanwhile, in the absence of regular media coverage, the Twitter stream is on fire. Search for #googmayharm or #googmeltdown on Twitter and follow the story as it unfolds in real time there.

This is destined to be yet another example of Twitter’s emerging importance – denied their Google lifeline, people are turning to Twitter in droves to find out what’s going on, ask questions, swap stories. It’s the global digital heartbeat of our time.

UPDATE: 10:17 est – I thought at first it was fixed. The same innocuous search for “disney” I ran above now comes up clean. Tested this – it’s still broken with other searches, but the second time you run a search it comes through OK.

UPDATE: 10:19 est – Now looks like it’s really getting fixed. I think they’re rolling the cleanup through servers and datacentres. Some searches still bust, but most are clean. Depends on which server cluster your search hits. Now just waiting to see what Google’s PR people are going to say about this. Certainly not the catastrophic digital alzheimer’s story some tweets seemed to suggest, but made for an interesting and exciting little half an hour there while we contemplated the death of our groupmind.

UPDATE: 10:27 est – Interesting… I wonder if this global Net nanny hack swept across more than just Google’s search servers. I have my blog set to auto-forward all of my new posts to my Gmail account (paranoid belt-and-braces backup). This post got flagged by Gmail as spam. That’s certainly never happened before. Was the Google hack wider than just search?

UPDATE 10:32 est – a good point made by John Minnihan (@jbminn on Twitter): I’ve been carelessly throwing around the word “hacked”, but there’s no real evidence yet to say whether this was a hack or just a cockup in updating something at Google’s servers. This could have been something like an accidental tweak of their malware filters that then rolled out through their entire back end. Curious to see what Google says.

UPDATE 10:40 est – I’ve seen a suggestion floating around Twitter that the source of the meltdown may have been server failure at StopBadware.org, described as “Google’s outsourced malware partner”. Perhaps, but that seems a little unlikely. Would Google’s infrastructure really be so ill-designed as to allow a single point of failure to knacker their entire search operation like this? More likely, I think, that the flood of click-through traffic to StopBadware.org (linked to from every broken search result this morning) caused the Stop Badware servers to grind to a halt after the fact.

FINAL UPDATE: Feb 2, 11:44 est – It’s a couple of days later and this Google brownout is old news now. For the sake of completeness, though, I wanted to just add one final update. As this post on the Official Google Blog states, it turns out that the source of the problem was actually a maddeningly simple human error. Looks like there’s some shared responsibility between Google and StopBadware.org (here’s the post from StopBadware about the issue), and a little unsurprising finger-pointing going on.

Now that the dust has settled and all is once more right with the world, it’s worth noting that Google’s response was genuinely impressive here. Problems are bound to happen. Sometimes, even relatively small errors can have catastrophic results – it was a single-character coding error, for example, that ultimately led to NASA’s emergency “destructive abort” of the Mariner 1 spacecraft at a cost of many millions of dollars. The test of any individual or organization’s mettle is how they respond when things go pear-shaped.

In this case, Google caught the issue fast, diagnosed and rolled out a fix, and then owned up to the problem on their blog and in media interviews, providing full information about how they goofed. Good job. Even better, Marissa Mayer, Google’s Vice President of Search Products & User Experience, put her name to the post on the Google blog – not some junior communications staffer or anonymous spokesdrone.

The only thing I’d like to have seen them add to this would be to open that blog to comments. There was an enormous amount of online conversation about this issue; it would be great to see Google fully joining that conversation, as opposed to this uni-directional broadcast approach.

They are maintaining a list of all trackbacks to their blog post, so that all sides of the discussion get some airtime. But for an issue as big as this, I’d like to see them diving into a comment thread and addressing people’s questions and concerns in an open dialogue.

Still, a pretty solid crisis response, and one which should help mitigate any damage to their reputation from this short-lived but very high-profile issue.

IABC Toronto: Social Media and the Modern Communicator

I’m back from chairing an enjoyable, lively and (I thought) really interesting panel session at tonight’s IABC professional development event, Social Media and the Modern Communicator. Many thanks to the IABC Toronto Chapter for organizing and promoting this sold-out event, and to the terrific panelists for giving generously of their time and knowledge – shout out to Mathew Ingram, Jen Evans and Boyd Neil.

Too tired to blog at length, but a quick observation and some links I promised…

First, probably the most startling moment of the evening, for me, was very early on just as we were getting warmed up. I’ve been speaking about social media at conferences, seminars and other events for nearly seven years, and I figured we must be getting way beyond the 101 level by now.

We had an audience of just over 200 professional communicators at the event tonight. In an effort to gauge the general awareness and knowledge level of the audience, I asked a couple of quick qualifying questions. Here they are, with my rough assessment of the results based on a show of hands:

1. How many people here are actively blogging?
– Approximately a dozen people, perhaps 20 at most (out of the 200)

2. How many people here are on Twitter?
– Close to 60% of the room!

This blows my mind. I know that Twitter is a heck of a lot quicker and easier to get started with than full-on blogging, and I guess it requires less commitment and close to zero tech skills, but I’m still delighted and amazed at just how many people in Toronto have caught the bug.

Hey! Shel Israel! – we got your Twitterville right here!

Is this what it’s like elsewhere? Has the growth of Twitter been as fast in other cities, or is T.O. really as special as we like to think it is? With so many Twitter apps out there, has anyone worked up a Google Maps mashup that shows the concentration of tweets per capita in various parts of the world?
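(For what it’s worth, the calculation behind such a mashup would be the easy part. Here’s a rough Python sketch of the tweets-per-capita ranking – all city names and numbers below are made-up placeholders, purely illustrative, not real measurements:)

```python
# Hypothetical sketch: rank cities by tweets per 1,000 residents.
# Every figure here is an illustrative placeholder, not real data.

city_stats = {
    # city: (tweets_sampled, population)
    "Toronto": (12_000, 2_500_000),
    "London":  (30_000, 7_500_000),
    "SF":      (25_000,   800_000),
}

def tweets_per_capita(stats):
    """Return (city, rate) pairs sorted by tweets per 1,000 residents, highest first."""
    density = {
        city: tweets / population * 1000
        for city, (tweets, population) in stats.items()
    }
    return sorted(density.items(), key=lambda kv: kv[1], reverse=True)

for city, rate in tweets_per_capita(city_stats):
    print(f"{city}: {rate:.1f} tweets per 1,000 residents")
```

The hard (and fun) part would be the plumbing: sampling geotagged tweets, then feeding the rates plus city coordinates to the Google Maps API as a heatmap overlay.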

Fascinating stuff (for a complete nerd like me). I’m going to have to do some more digging around to see if this is anomalous or if it just seems that way from inside the bubble.

Meanwhile, my esteemed panelists and I dropped a number of links and tips during tonight’s session, which I promised I’d try to catalogue here. I don’t know that I captured all of them, but here are the ones I remember.

Social Media Policies
A few good examples were mentioned, including those used at Dell, IBM, and elsewhere. I’ve been collecting and bookmarking something of a list of interesting social media and corporate blogging policies for a couple of years using the Delicious social bookmarking tool. You’ll find all of these (including the Dell and IBM examples) here:

I also mentioned (with my tongue only half in my cheek) the shortest (and one of the best) HR policy manuals ever written (“Rule 1: Use Good Judgement,” etc.). I blogged about this a while back in the context of policies for corporate Twitter use, here.

Thirdly, you might be interested in a sample of one of the “online interaction” policies we’ve helped develop for our clients. You can find one in the privacy policy at the foot of the Herbal Magic site, here.

Tools for Internal Social Media
There was a good question at one point about “Twitter behind the firewall”. Our panelists rattled off a bunch of examples, probably too fast for many people to note down. Here are some applications worth checking out.

Yammer (Twitter-like internal micro-blogging, as used by Boyd’s firm), plus a similar tool (think: Yammer, but with better admin controls and UI options; my colleague, Dave Fleet, has a great review here)

For the technically adept, there’s also Laconica – a DIY platform to build your own Twitter-like apps (of which the best-known implementation so far is Identi.ca).

It’s also worth mentioning that if you want to add full-fledged blogging inside the firewall, it’s very easy to set up a WordPress installation for internal communications purposes. Works well, easy to administer, and there are a bunch of good people around who can help you get things working how you want them (including, I’m cheesily obliged to point out, a certain great firm that can offer both the design & build work and the strategic consulting help).

Recommended Reading
I asked the panelists for book recommendations and think they offered some terrific ideas. In no particular order, here are the ones I can recall us mentioning (and a couple of bonus titles we didn’t, but perhaps should have):

Here Comes Everybody – Clay Shirky

What Would Google Do? – Jeff Jarvis

Groundswell – Charlene Li and Josh Bernoff

Web Analytics: An Hour a Day – Avinash Kaushik

The Cluetrain Manifesto – Levine, Locke, Searls & Weinberger

Small Pieces Loosely Joined – David Weinberger

Everything is Miscellaneous – David Weinberger

(You notice a theme here, btw? Basically, you should just read everything UofT alum David Weinberger has ever written, including his splendid blog. Yes, he’s an old friend. Yes, I’m completely biased – but the man is a certified genius and a funny, wonderful writer)

Gonzo Marketing – Chris Locke

Wikinomics – Don Tapscott and Anthony Williams

And finally, a personal favourite I think a lot more people ought to read:

Ambient Findability – Peter Morville

And the last little housekeeping link in this now-not-so-short post – we mentioned the US Air Force’s “decision tree” used to determine how and when they will respond to online discussion. Dave Fleet (yes, him again) has a post on the topic here and Toronto’s favourite accordion-playing supergeek social media pioneering thriller from Manila, Joey deVilla, has a bigger, updated version of the chart, here. (Hey, Joey! I think I just made you sound like @Mike_56)

That’s all I can think of for now. Thanks again to all who attended, to the boss for some great live tweeting, and to everyone following on Twitter for splendid questions and discussion during and after.