👋 Hello World

Why I focus on technical SEO as a consultant

At the beginning of the year, when I parted ways with Directlyrics, time opened up to take on projects as an external consultant. Given my decade-long experience with SEO, I reached out to several companies about my availability, and it became clear there is still high demand for organic growth. I knew from earlier experience as a freelance consultant that I had to position myself correctly, or I would end up writing blog posts for a living. While there is nothing wrong with writing content, for me that is too operational as a professional activity, and also way too competitive as a service. There will always be someone else writing blogs, cheaper. Operational activities also have no end date. For an agency that might be a great way to generate long-term revenue (done by junior employees), but my time can only be spent once. Better to spend it wisely, and focus on high impact. And so Build was born.

The way to go was to position myself as a technical SEO consultant (I so want to link those words to my professional website, but I will behave). Many aspects of SEO are becoming more and more technical. And with my background and experience, focusing on the technical parts would not only be most interesting to me, but also be a competitive advantage for myself and for my clients, helping me position my work in a more premium segment as a consultant.

Technical SEO covers many of the advanced opportunities in the SERP: triggering rich snippets, knowledge panels, and many of the industry-specific one-boxes (like Q&A results). But also site speed, indexing optimisation, content discovery, reliability and even UX. All these aspects are moving parts in winning more positions in the search results and growing traffic. I explain more about this here (Dutch).


How to optimise your favicon for Google's new organic results?

Google recently introduced website icons next to its organic search results. They come together with an overall new search results design that blends paid ads into the organic results even more: paid ads now look identical, except for an 'Ads' (or a localised variation) label next to the website URL or breadcrumb instead of an icon.

With ads harder to distinguish, it becomes even more important to focus your energy on creating an engaging organic listing for your website. And with this new design, your website icon is an extra opportunity to stand out in the results.

Unfortunately the icons look very blurry. Not only is 16 by 16 pixels very small, but Google also doesn't support retina variants, which makes the icons look even worse on high-DPI screens. I asked Google to look into supporting larger icon sizes (at least 32x32).

(Screenshots: the blurry icons of YouTube and Wikipedia.)

How is the icon picked?

Google creates a 16x16 icon based on the favicon provided by your own website.

Updated 4 June 2019: more testing shows that Google actually picks a different icon for the SERP than Google Chrome does, or than the icon shown in Google Search Console!

I found that it follows this waterfall:

Googlebot visits your root domain (including subdomain):

  1. It picks the largest size declared via your `<link rel="icon">`, `<link rel="apple-touch-icon">` or `<link rel="shortcut icon">` tags.
  2. Sizes being equal, it prefers a .png or .jpg over an .ico file.
  3. If those are unavailable, it falls back to the file hosted at domain.com/favicon.ico.
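
To make the waterfall concrete, here is a small Python sketch of the selection logic as I interpret it from testing (my reconstruction, not Google's actual code):

```python
# Toy model of the icon-selection waterfall described above.
# My interpretation from testing; not Google's actual implementation.

def pick_serp_icon(declared_icons, has_root_favicon_ico):
    """declared_icons: list of dicts like
    {"href": "/icon.png", "size": 180, "type": "png"},
    parsed from the <link rel="icon">, <link rel="apple-touch-icon">
    and <link rel="shortcut icon"> tags on the root domain."""
    if declared_icons:
        # 1. The largest declared size wins...
        largest = max(icon["size"] for icon in declared_icons)
        candidates = [i for i in declared_icons if i["size"] == largest]
        # 2. ...and sizes being equal, .png/.jpg beats .ico.
        candidates.sort(key=lambda i: 0 if i["type"] in ("png", "jpg") else 1)
        return candidates[0]["href"]
    # 3. Fallback: the file hosted at domain.com/favicon.ico.
    if has_root_favicon_ico:
        return "/favicon.ico"
    return None  # no icon found: Google shows a generic blue globe


icons = [
    {"href": "/favicon.ico", "size": 16, "type": "ico"},
    {"href": "/touch-icon.png", "size": 180, "type": "png"},
]
print(pick_serp_icon(icons, True))  # -> /touch-icon.png (largest wins)
```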

Opinion: That the `apple-touch-icon` is used for the SERP (and scaled down to 16x16 pixels) is probably not what website owners expect, and in my opinion not the correct implementation by Google. It should be in line with how the browser picks the icon for its website tabs.

Tip: Use the following tool to test which icon Google picks for your domain: https://build.amsterdam/icon-test.

What are good ways to optimise my icon?

There is not much data available yet, but from my anecdotal experience these work best:

  • Optimise your icon for a small size. Don't just downsize a large logo. Best is to keep the source under 64x64 pixels, and design for what fits in there.
  • Keep it simple. A single letter (see Wikipedia) or icon (see YouTube) on a bright background works best.
  • Don't keep too much white or transparent space around your icon. It will make your icon look even tinier.

What to do if I get a blue globe next to my URL?

If there is a blue globe next to your domain, that means Google couldn't find an icon for your website. You can solve this by creating a square icon and either (1) converting it to .ico format and hosting it at domain.com/favicon.ico or (2) referring to it with a `<link rel="icon">` tag. Mozilla provides a good guide on this.
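
If you want to script option (1), here is a minimal sketch using the Pillow imaging library (an assumption on my side: any tool that outputs a valid multi-size .ico works just as well):

```python
# Minimal sketch: convert a square source image into a multi-size favicon.ico.
# Assumes Pillow is installed (pip install Pillow) and a square icon.png exists.
from PIL import Image

img = Image.open("icon.png")  # square source, ideally designed small
img.save("favicon.ico", format="ICO", sizes=[(16, 16), (32, 32), (48, 48)])
```

Then host the result at domain.com/favicon.ico, or point to it with `<link rel="icon" href="/favicon.ico">`.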

For now Google serves the icons in the search results on mobile devices only, but don't be surprised if this rolls out to desktop as well.


Google's supervising ranking algorithm theory

How Google created a supervising algorithm to precisely control all the search traffic it sends out.

I’ve spent the past 10 years watching, analysing and interpreting the search results on a daily basis. I’ve reviewed over 10 billion clicks from a Google search to one of my (or my clients’) web properties.

But even with all the resources on SEO available on the web, and all the time I spent, I still couldn’t explain key aspects of what I saw, or identify why certain keywords made us rank (or not). Until I approached the problem from Google’s perspective… then it all started making sense to me.

Google seems to have quietly introduced a new approach to its ranking algorithm, and told nobody. Powered by its unprecedented computational and brain power, it was finally able to find a solution to three key issues in the search ecosystem: web spam, fair rankings and stability.

This article is the first public disclosure of what I think is a game-changing, fundamentally different approach to interpreting the search rankings. I introduce to you: Domain Ranking Bandwidth (DRB), powered by a previously unidentified supervising Google algorithm.

None of the existing expert surveys mention a concept close to DRB. But it can explain many of the “why am I (not) ranking?” questions haunting so many marketers, business owners, growth hackers and SEO experts.

Note: This write-up is still speculative. It is hard to get confirmation from Google on algorithm topics, but I am working on disclosing the data behind my theory.

Please read on, and it will all start making sense:

  1. Introduction
  2. Search, the early days
  3. A fundamental new approach
  4. Introducing: Domain Ranking Bandwidth
  5. Implications for ranking
  6. In summary

Introduction

To understand the current state of search rankings, we’ll first have to take a step back to where Google search came from. This is fundamental to understanding why Google needed to change its approach to ranking in a big way, and what it tried to solve.

Note: The timeline below is purely speculation.

Search, the early days (pre-2011)

mo searches, mo problems

Google was able to nail the ranking of websites for keywords early on in its existence. Its founders’ fundamental idea to use web links as if they were citations had a proven track record in science for decades, and worked perfectly well early on for the web.

The search and ranking nut seemed to be cracked. Google’s usage exploded. But with the success of the search engine also came high stakes: the traffic Google was sending was worth real money. Big brands, local businesses, startups, all the way down to web spammers — all got addicted to the traffic, and all tried to boost their rankings.

Google’s reaction was to continuously adjust its algorithms to match the best quality results to searches. And changes it had to make: many were rolled out over many more years, focused on brands, promoting diversity, quality content, localised results, fresh results, or plainly weeding out spam with (manual) penalties.

It was hard for Google to nail down the perfect mix. Especially in the ongoing battle with spam, it was up against tireless spammers with little to lose.

But all the algorithm changes to fix the SERP had side-effects — beyond different site rankings — that were hurting the company:

  • Bad PR. With every algorithm change, some mom-and-pop store got booted out of Google, leading to headlines of Google killing small businesses. Whatever the reason was, newspapers enjoyed writing about the narrative, and penalised sites actively pursued the media so Google might undo their penalty. But larger brands (BMW, JC Penney, Forbes) also started demanding that Google rank their sites, and made a big stir when it didn’t.
    How could algorithm roll-outs (+ penalties) exist while maintaining a stable SERP and not triggering negative PR?
  • Peak-Wikipedia. Certain properties (with Wikipedia as the flagship example) had amassed so much authority and trust on a wide range of topics that it was almost impossible not to see them rank in the top 3 slots for a search query. Commercial companies especially (Associated Content, Yahoo, Demand Media, etc.) saw their opportunity and used their authority to grab as much of the traffic from Google as possible by churning out content on any topic, turning this into millions in ad revenue. A limited number of properties were too good, and took too much of the search traffic pie.
    How could the SERP continue to have enough variation, with traffic spread more ‘fairly’ across the web?
  • ∞-Spam. Spamming the web had little to no cost. Literally. Weeding out one spammer only made them move on and change tactics. Google’s compute & workforce would always be outnumbered in fighting the spammers, because the spammers had only upside.
    How could Google increase the spammers’ costs?

Many more updates followed: Penguin, Panda, etc. All big strides forward, but they didn’t solve the core issues Google faced with the search results: web spam, fair rankings and stability.

A fundamental new approach

precise and total control

By now — 2012 — Google had become much more capable, with a huge investment in compute power and workforce. For example, it no longer served a fixed set of results pages for search keywords. The results became device-specific, localised, a/b-tested and personalised. Thousands of variations for a single keyword. Billions for keyword combinations.

Google also now had the ability to continuously update its web index, go beyond just a top-10 by introducing specific one-box results, and roll out changes with more granular controls, even disclosing many of them publicly.

To understand what happened next, let’s go back to the question of how to increase the spammers’ costs. The answer from Google’s web spam head Matt Cutts: start wasting the only limited resource available to the web spammers, their time. Instead of kicking spamming domains out of the results, Google would from now on adjust the amount of traffic allocated to the spammers’ domains on a downward slope, frustrating their efforts.

This resulted in spamming domains getting less and less total traffic over time, not knowing they had been flagged, slowly losing rankings overall.

While some individual ranking gains were maintained to keep spammers wasting their time, the overall traffic was on a downward slope, on its way to zero in x amount of time.

Google had time, and saw what it gained by not triggering the spammers’ attention after an algorithm change caught up with them: it frustrated and confused the spammers, while wasting their time. Google’s great find.

This great find was then followed by a big EUREKA moment when Google’s engineers understood this could be applied to every website on the web.

With all the big data gathered from search, Google started taking precise and total control of all the traffic they sent out:

Google started allocating website domains a pre-defined search traffic bandwidth. All domain rankings in aggregate equal the total traffic allowed.

I call this the Domain Ranking Bandwidth (DRB): a supervising algorithm controlling all the traffic the search engine sends out to every domain. The algorithm allocates a pre-defined traffic bandwidth, consisting of a lower and an upper bound allocated based on the domain's authority, within which traffic is then assigned to you through specific positions in the search results.

All of a domain's individual rankings together contribute to its pre-defined allocated bandwidth range.

The bandwidth behaves similarly to how analysts read company stock price charts. To make future predictions on traffic (or price, as with stock), an upper and a lower bound are drawn within which the traffic fluctuates over time but doesn’t break out. Every single ranking position of a domain translates to a certain amount of traffic, which in total falls within the domain's allocated bandwidth.

Google can predict the search volume for any keyword, the resulting traffic for each position, and for every variation of its search results.

Google's new algorithms are set to only adjust the angle of your domain ranking bandwidth over time.

An individual jump in rankings for one keyword, caused by an underlying algorithm, will lead to more traffic. But if that jump would make the domain break out of its upper bound, the supervising DRB algorithm tweaks other ranking positions down a bit, applying lower average positions to other keyword rankings, just enough to keep the domain within bounds despite the individual jump.
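
As a toy model of this clamping behaviour (purely illustrative, with made-up numbers; this is my theory, not disclosed Google code):

```python
# Toy model of the supervising DRB algorithm from my theory.
# All numbers are made up; this only illustrates the clamping behaviour.

def apply_drb(keyword_traffic, upper_bound, protected=None):
    """keyword_traffic maps keyword -> expected clicks from current positions.
    If the total breaks out of the domain's upper bound, dial the *other*
    rankings down just enough to stay within the allocated bandwidth."""
    total = sum(keyword_traffic.values())
    if total <= upper_bound:
        return keyword_traffic  # within bandwidth, nothing to adjust
    protected = protected or set()
    kept = sum(v for k, v in keyword_traffic.items() if k in protected)
    rest = total - kept
    # In reality: slightly lower average positions spread over many other
    # keywords; here simply a proportional scale-down of the rest.
    factor = max(0.0, (upper_bound - kept) / rest)
    return {k: v if k in protected else v * factor
            for k, v in keyword_traffic.items()}


traffic = {"lyrics a": 1200, "lyrics b": 800, "lyrics c": 500}
traffic["lyrics a"] = 2500  # an underlying algorithm grants a big jump
print(apply_drb(traffic, upper_bound=2600, protected={"lyrics a"}))
# Total stays at 2600: the individual win is offset by losses elsewhere.
```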

This strategy of allocating a bandwidth resolved the core search ranking issues Google faced:

  • Fairness. With every domain kept within its bandwidth (even Wikipedia, Yahoo! and Google itself), no single property can become all-encompassing and dominant. While an underlying algorithm might decide Wikipedia is the best result for a search query, the supervising DRB can determine Wikipedia is already at its upper bound in traffic, giving other websites the opportunity to take that position and traffic, or slightly adjusting other Wikipedia rankings downwards.
  • Stability. No more big jumps leading to bad PR or frustrated webmasters. Search rankings have become boring and predictable(ish). Large e-commerce sites are able to predict their revenue (and invest it into Adwords ;)). The stability leads to an overall healthier ecosystem for Google, where everybody is incentivised to keep investing in (fresh) content over a longer period.
  • Spam. Spammers are identified, and their traffic heads towards zero by adjusting only the slope of their bandwidth (strongly) downwards. Spammers continue to see rankings, not knowing that their efforts are futile. The downside of this approach is that spam sites still rank sometimes, but slowly and steadily they do disappear from the top results, with the spammers wasting their time on the thing they thought (still) worked, instead of moving on to a new spam tactic.
  • Blackbox. As an extra, it becomes almost impossible to reverse engineer the ranking algorithm. Given that many (sub-)optimal rankings or shifts (+1, -5) are based on the domain ranking bandwidth and not on the site’s actions, content, linking, etc., a single change can have different outcomes. E.g. you could be doing amazingly SEO-wise and deserve lots of #1 positions, except your bandwidth is fully filled and you are throttled by your upper bound. Then you can only wait, as more and more rankings are slowly gained to match a slowly increasing upper bound.

Implications for ranking

The domain ranking bandwidth has major implications for websites ranking in Google.

Chasing individual wins is futile if the overall DRB angle is flat (no growth). Individual ranking changes are not a tell-tale sign that you are doing the right things unless you take your whole property’s DRB slope direction into account.

This especially applies when you are already at the upper bound of the bandwidth:

A big win for a keyword you have been chasing does not translate into more total traffic. The DRB steps in, and the individual keyword win leads to an equal, matching loss, masked by being spread over many other (average) rankings being dialed down.

Next to that, when dealing with rankings:

  • Charts. Daily charts lose their meaning. They miss the context of the long-term direction.
  • Tracking. Individual keyword trackers are irrelevant.
  • Complexity. Decisions have to be made on whether a win for one keyword is worth the matching potential loss for others.
  • Wins. Seeing a #4 jump to #1 is fun but irrelevant, until it’s matched with an upwards slope of total traffic over a longer period.
  • Featured listings. A big win as a one-box such as a featured snippet, knowledge panel or “in the news” won’t scale to (all) other keywords, given it is merely a side-effect of you being chosen to fill the DRB’s needs.

A real-life example is the property below: its organic traffic over a period of 8 months. The bandwidth slope is clearly downwards. Some rankings are gained, but quickly throttled down to match the downward slope (the peaks).

Variations occur on a daily and weekly basis, but overall all ranking activity stays within the DRB.

In summary:

  • Google allocates traffic to your domain using fine-grained control mechanisms.
  • Your website traffic from Google is bound to a pre-defined bandwidth (DRB).
  • The bandwidth only changes its slope slightly by your actions or new algorithm adjustments.
  • Google plays a long game; you need to align as well.
  • Stop interpreting a single keyword ranking, start using a broad holistic approach by looking at total traffic retrieved.
  • If you are at the upper bound of your DRB, any win equals a matching loss (spread over multiple rankings).
  • If you see lots of variation in traffic, good chance you have not reached your upper bound yet, or your timeframe is set too small (< 14 days).

Continue with part II (work in progress), which provides a guide on how to identify your domain ranking bandwidth bounds, how to start thinking in long-term slope adjustments, and how to read your existing rankings within the DRB framework to find a strategy to grow your traffic.

In the meantime, follow me on Twitter @yvoschaap for updates as I uncover this topic further.


I am in no way affiliated with Google Inc., and all statements are (educated) guesses.


My end of year

The end of the year again! Time for some semi-public reflection. First of all, I've been neglecting my own blog, which for the record has existed in this form since 2005 (and funnily enough also started with an end-of-year review). That's kind of a waste, given a blog is a great way to commit your thoughts to writing.

I won't make any promises I might not keep, especially at a time of new year's resolutions. But I want to share more here, and especially less of me only checking in to promote some new launch (which is what I've been abusing the blog for, for a while now). I like Seth's idea of just showing up, which he reiterates in his new book This Is Marketing (nice read). But I have not been doing that for a while...

I have mixed feelings about this year. Overall, 2018 was a year with good personal & professional focus (I think I know by now what makes me happy), but maybe not much luck with results (given my entrepreneurial KPIs point way 📉).

I launched Mailbook (which gets almost daily happy testimonials), my son went to school (and is very happy there), I totally enjoy development with React now, my daughter was potty trained at 2.5, I launched several new projects under the build.amsterdam concept, and I found a new (possible) partner along the way (hi Marco). And of course, our family enjoyed a healthy year (which needs a mention once you pass 25, I guess).

But the new year starts with a sad decision: I've decided to stop operating Directlyrics (after over 11 years and 1 billion (!) page views). I've seen and enjoyed the high tide of lyrics, ringtones, licensing, ads and SEO, but also the downsides of low-value content, decreasing CPMs, increasing costs, and ever more complex competition. Read more in this IndieHackers post 👉.

I wasn't able to reinvent the project, didn't surprise users, and stuck with SEO as an acquisition strategy for too long. Though to be fair, not for lack of trying (lots of things were tried). Lyrics are a high-traffic business, and without the traffic (e.g. under 100k a day it is unmaintainable) the revenue isn't enough to continue. Kevin (the editor) was amazing over the last decade. Hire him.

But this makes room in 2019 to let go of old habits and start off fresh, finding new fuel and a route to success (defined as... ). Less lurking (bye Twitter), more actionable days (hello OKRs) and even more focus (oops, this does sound like a new year's resolution now).

I'll be enjoying the Swiss mountains for the last few days of the year. I'll keep you updated :)


Finding product/market fit with Mailbook

At the beginning of this year (2018), Bram, Emiel and I decided to build a cool side project. Bram was struggling to collect addresses from friends to send out an upcoming birth announcement, so he came up with the idea we named Mailbook. After users sign up, they get a personal link to share with their friends; friends add their address, and the addresses appear neatly in the user's personal address book. Simple. And inherently viral, because every new user sends a link to Mailbook to, on average, over 50 other people (in the same demographic).

In 6 months over 40,000 people added their addresses to a Mailbook. Some users collected over 200 addresses in under two days. And growth only seems to accelerate. Very cool to see.

I continue to work on it, empowered by the positive feedback and incoming feature requests. Most recently I released a quick way to create and print address labels. Compare my solution with the horrors built in Word + Excel in this Dutch article on printing address labels (adresetiketten printen).

But the latest change: an English version of Mailbook 🎉. Not only good for my Dutch users with international friends, but also for expanding the reach of the product.

So, are you expecting a baby anytime soon, or planning a wedding? Start collecting addresses.

More is explained (in Dutch) on this page about easily collecting addresses for birth announcements.


Latest published articles:

Below are the latest articles I published in several publications:


The Economist Explorer

I've recently posted two articles on Medium:

I'll maintain this blog and continue with updates, but Medium has evolved into a very useful publishing tool.


My own multi-room audio setup

I always liked the idea of having a Sonos-like multi-room setup in my house. The same tune near my dining table, in the kitchen, or even outside. But I don't want to create another audio system next to my fine-tuned hi-fi stereo setup. I already support AirPlay, Bluetooth, Spotify Connect, etc. through my high-end AV receiver. But that audio setup is only perfect for my living room area.

Physically wiring up other rooms by creating extended zones doesn't feel like a flexible solution. Wires suck, so wireless, right? Yes, but then we're stuck with latency that will interfere with the existing wired setup, unless you like echoes or a 2-second delay (looking at you, AirPlay). The scope of my wants:

  • Multi-room speakers that are in sync
  • Existing receiver as audio source
  • No wires throughout the house
  • Amplified speaker, preferably portable
  • Relatively high-quality output
  • Volume control locally and centrally
  • Stereo (optional)

Most existing wireless audio solutions introduce a hub setup, where a piece of hardware's only job is to sync up the hooked-up speakers. Every brand has its own solution. My issue with those is that they introduce an (expensive) piece of hardware, which actually adds latency from the source. There must be a better way.

The solution lies with Bluetooth. Apparently Bluetooth, of all technologies, has improved significantly over the years, and can now offer an audio codec focused on low-latency audio transmission: aptX. To be more specific (although confusingly named), the newly launched aptX Low Latency variant. Without making this sound like an ad, it's actually kinda cool: it specs at 32ms latency end-to-end while keeping a 'near CD quality' signal (352 kbit/s).

Compared to Sonos prices, I can now create a relatively low-cost custom wireless solution that extends my existing hi-fi: a quality Bluetooth speaker (or any amplified portable speaker or soundbar) plus a $50 USD Bluetooth transmitter.

Unfortunately, there are currently only a limited number of low-latency speakers available that support aptX. I did find a match with B&O's Beoplay S3. Released recently, but packed with a nice Class D amp and a beautiful design. Pricey, yes, but compared to what's available this matched best. The sound feels big for its relatively small size, and it is nicely tuned.

After getting everything hooked up, the real test came. Was Bluetooth aptX Low Latency the solution to my wants? How is the experience throughout the house? Happily, I can say it sounds great. Honestly, I did have to do an audio sync between the two speakers. But the setup is easy to understand, and very flexible.

I now own only one speaker, but it also has a stereo mode when paired up with another S3.

Tech stack:

  • Miccus Mini-jack TX4: AptX low-latency capable Bluetooth transmitter with an audio-in.
  • B&O Beoplay S3
  • Denon X4100W receiver
  • Kef LS50
  • Spotify app, with Spotify Connect
  • Denon app

Limitations:

  • Bluetooth's 10 meter open range and pairing.

Recent Product Hacks

I've always enjoyed launching small product hacks. Building something from scratch gives a great rush: hacking together APIs, backend code, new functions and responsive layouts. A hacker's mentality sets aside the sensible doubts that could lead to the trap of overthinking, while challenging the brain to kick into a creative, problem-solving mode. Some of these hacks became timeless successes; others vanished into a folder on my PC.

Below are a few recent projects I launched:

  • With the power of the Google Analytics Real Time Reporting API I created a beautiful search activity visualization. A remake of Google's own live search presentation, but made for any site running Google Analytics. It's listed in the Google Analytics app gallery.
  • Producthunt.com leaderboard: no API, so I'm crawling the site daily. Set up to get insights into what's going on in the Valley. It provides an overall overview of the activity on Producthunt, with a focus on the hunters as well as the products. I also found out I have so many data points that I can easily flag dubious voting going on. Upvote.
  • I really enjoy YouTube. But it seems they go out of their way to make it difficult to just continuously play a channel's videos as one big playlist. Case in point: Majestic Casual, a video channel with over 1.7M followers, which serves a great selection of music to play in the background. So I created the unofficial majesticcasual.tv, which does what I want and looks nice too.
  • Marc Andreessen tweet essays fixes the readability of the thoughts of well-known investor Marc Andreessen, which are spread out over multiple tweets on Twitter. @pmarca is known to tweet up to a dozen tweets in a row (a tweetstorm), but Twitter has no good way to group and read them. I even found they roll up his intermediate tweets in the stream. The project received very positive, high-profile attention on ProductHunt and on Twitter. Really cool to see the people I follow also mention this project on Twitter.
  • Solved an annoying issue where the native filters on iPhone photos can't be accessed on a Windows computer, because the filter is only part of the metadata. The previous solution was to email myself the photo. A better solution is this crazy hack that uses the Dropbox API: I auto-sync with Dropbox, and this way I just select which photos I want to save back into Dropbox with the filter baked into them. Magic!
  • And my latest hack is Citytrip, which collects recommendations from Airbnb hosts and lists them on a map. Great for discovering local places to eat and drink. More details on the Producthunt listing.

All of these projects scratch a personal itch, were developed in a handful of hours, and more importantly were fun to make and able to capture some of the web's attention.


Product Hunt hunters analyzed

Exploring the insiders in the startup scene

Only live to the public for 100 days, Product Hunt already boasts an impressive, affluent user base. Launched initially as a 20-minute MVP by Ryan Hoover (ex-PlayHaven, Startup Edition) and further developed into an actual product together with Nathan Bashaw (General Assembly), its user engagement and traction show they are on to something.

Their invite-only system has grown a user base featuring a who’s-who of the startup tech scene. Well known investors, founders, journalists, and developers are all hanging around.

My everlasting interest in the startup scene sparked me to look more closely at these so-called tech insiders. A quick crawl of the site pulled in all 636 contributors, who have posted a total of 1,571 products since October ’13.

By matching up their Twitter usernames with the Twitter API, I was able to pull in more profile details like location and Twitter reach.
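
For the curious, the enrichment step looked roughly like this (a sketch against the Twitter API v1.1 users/lookup endpoint, assuming an app-only bearer token; not my original crawler code):

```python
# Sketch: enrich Product Hunt contributors with Twitter profile details,
# batched 100 usernames per request as the v1.1 endpoint allows.
# Assumes a valid bearer token; not the original crawler code.
import requests

BEARER_TOKEN = "..."  # your Twitter API bearer token

def lookup_profiles(usernames):
    profiles = []
    for i in range(0, len(usernames), 100):
        batch = usernames[i:i + 100]
        resp = requests.get(
            "https://api.twitter.com/1.1/users/lookup.json",
            params={"screen_name": ",".join(batch)},
            headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        )
        resp.raise_for_status()
        for user in resp.json():
            profiles.append({
                "username": user["screen_name"],
                "location": user.get("location"),
                "followers": user["followers_count"],
            })
    # Rank by Twitter reach, as in the leaderboard below.
    return sorted(profiles, key=lambda p: p["followers"], reverse=True)
```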

Leaderboard

Instead of doing a lengthy write-up of my findings, I've coded up a leaderboard that updates every 12 hours.

I ranked the most influential people who contributed at least one product on Product Hunt by their Twitter follower count:

  1. Ashton Kutcher (doer): 15,674,053
  2. Kevin Rose (Partner, Google Ventures): 1,447,113
  3. Tristan Walker: 279,690
  4. Brad Feld (Managing Director, Foundry Group): 162,671
  5. Johnny Shahidi (Co-Founder, Shots): 141,352
  6. Hiten Shah (Founder, KISSmetrics): 141,192
  7. MG Siegler: 139,271
  8. Nihal Fares (Co-founder & Chief Product at Eventtus): 100,937
  9. Fred Oliveira (Head of Product, Disruption Corp): 59,428
  10. Hunter Walk (Partner, Homebrew): 57,575

If we look at the site's activity, these hunters have the most reach:

  1. Ryan Hoover (Product Hunt / Tradecraft): 948 (#77)
  2. Murat Mutlu (Co-Founder, Marvelapp): 653 (#44)
  3. Jonno Riekwel (Product designer, Jonnotie): 438 (#48)
  4. Dave Ambrose (Venture Investor): 353 (#38)
  5. Kevin William David (CEO, WalletKit): 342 (#42)
  6. Adam Kazwell (Product Manager): 342 (#40)
  7. Robert Shedd (Founder): 334 (#34)
  8. Geoffrey Weg (Independent): 313 (#23)
  9. Nathan Bashaw (Product Manager at General Assembly): 298 (#37)
  10. James Mundy (Founder, Foundbite): 277 (#2)

Follow all these hunters with this Twitter list. If there is interest, I'll add more features to the leaderboard.

