👋 Hello World

Google's supervising ranking algorithm exposed

How Google created a supervising algorithm to precisely control all search traffic it sends out.

I’ve spent the past 10 years watching, analysing and interpreting search results on a daily basis. I’ve reviewed over 10 billion clicks from a Google search to one of my (clients’) web properties.

But even with all the SEO resources available on the web, and all the time I spent, I still couldn’t explain key aspects of what I saw, or identify why certain keywords made us rank (or not). Not until I approached the problem from Google’s perspective did it all start making sense to me.

Google seemed to have quietly introduced a new approach to its ranking algorithm, and told nobody. Powered by its unprecedented computational and brain power, it was finally able to tackle three key issues in the search ecosystem: web spam, fair rankings and stability.

This article is the first public disclosure of what I think is a game-changing, fundamentally different SEO approach to interpreting search rankings. I introduce to you: Domain Ranking Bandwidth (DRB), powered by a previously unidentified supervising Google algorithm.

None of the existing expert surveys mention a concept close to DRB. But it is able to explain many of the “why am I (not) ranking?” questions haunting so many marketers, business owners, growth hackers and SEO experts.

Please read on, and it will all start making sense:

  1. Introduction
  2. Search, the early days
  3. A fundamental new approach
  4. Introducing: Domain Ranking Bandwidth
  5. Implications for ranking
  6. In summary

Introduction

To understand the current state of search rankings, we first have to take a step back to where Google search was coming from. This is fundamental to understanding why Google needed to change its approach to ranking in a big way, and what it tried to solve.

Search, the early days (pre-2011)

mo searches, mo problems

Google was able to nail the ranking of websites for keywords early on in its existence. Its founders’ fundamental idea to treat web links as citations had a proven track record in science for decades, and it worked perfectly well early on for the web.

The search and ranking nut seemed to be cracked. Google’s usage exploded. But with the success of its search engine also came the high stakes. The traffic Google was sending was worth real money. Big brands, local businesses, startups, all the way down to web spammers: all got addicted to the traffic, and all tried to boost their rankings.

Google’s reaction was to continuously adjust its algorithms to match the best quality results to searches. And changes it had to make. Many were rolled out over the years, focused on brands, diversity, quality content, localised results, fresh results, or plainly on weeding out spam with (manual) penalties.

It was hard for Google to nail down the perfect mix. Especially in the ongoing battle with spam, it was up against tireless spammers with little to lose.

But all the algorithm changes to fix the SERP had side-effects beyond different site rankings, and those side-effects were hurting the company:

  • Bad PR. With every algorithm change, some mom-and-pop store got booted out of Google, leading to headlines about Google killing small businesses. Whatever the reason, newspapers enjoyed writing about the narrative, and penalised sites actively pursued the media in the hope Google would undo their penalty. Larger brands (BMW, JC Penney, Forbes) also started demanding that Google rank their sites, and made a big stir when they weren’t.
    How could algorithm roll-outs (plus penalties) be combined with a stable SERP and not trigger negative PR?
  • Peak-Wikipedia. Certain properties, with Wikipedia as the flagship example, had amassed so much authority and trust on such a wide range of topics that it was almost impossible not to see them rank in the top 3 slots for a search query. Commercial companies in particular (Associated Content, Yahoo, Demand Media, etc.) saw their opportunity and used their authority to grab as much Google traffic as possible by churning out content on any topic, turning it into millions in ad revenue. A limited number of properties were too good, and took too much of the search traffic pie.
    How could the SERP keep enough variation, with traffic spread more ‘fairly’ over the web?
  • ∞-Spam. Spamming the web had little to no cost. Literally. Weeding out one spammer only made them move on and change tactics. Google’s compute and workforce would always be outnumbered in fighting the spammers, because the spammers had only upside.
    How could Google increase the spammers’ costs?

Many more updates followed: Penguin, Panda, etc. All big strides forward, but they didn’t solve the core issues Google faced with the search results: web spam, fair rankings and stability.

A fundamental new approach

precise and total control

By now, in 2012, Google had become much more capable, backed by huge investments in compute power and workforce. For example, it was no longer serving a fixed set of results pages per search keyword. Results became device-specific, localised, A/B-tested and personalised: thousands of variations for a single keyword, billions for keyword combinations.

Google also now had the ability to continuously update its web index, to go beyond a plain top 10 by introducing specific one-box results, and to roll out changes with more granular controls; it even disclosed many of them publicly.

To understand what happened next, let’s go back to the question of how to increase the spammers’ costs. The answer from Google’s web spam head, Matt Cutts, was to start wasting the only limited resource available to web spammers: time. Instead of kicking spamming domains out of the results, Google would from then on put the traffic allocated to those domains on a downward slope, frustrating their efforts.

The result: spamming domains got less and less total traffic over time, not knowing they had been flagged, slowly losing rankings overall.

While some individual ranking gains remained, to keep the spammers wasting their time, the overall traffic was on a downward slope on its way to zero in a given amount of time.

Google had time, and it saw what it gained by not tipping off spammers after an algorithm change caught up with them: it frustrated and confused them while wasting their time. Google’s great find.

This great find was then followed by a big EUREKA moment, when Google’s engineers understood the same approach could be applied to every website on the web.

With all the big data gathered from search, Google started taking precise and total control of all the traffic it sends out:

Google started allocating each website a pre-defined search traffic bandwidth. All of a website’s rankings in aggregate add up to the total traffic it is allowed.

I call this the Domain Ranking Bandwidth (DRB): a supervising algorithm controlling all the traffic the search engine sends out to every website. The algorithm allocates a pre-defined traffic bandwidth. The bandwidth consists of a lower and an upper bound, allocated based on the domain’s authority, within which traffic is then assigned to you through specific positions in the search results.

All of a website’s individual rankings together contribute to this pre-defined allocated bandwidth range.

The bandwidth behaves similarly to the way analysts chart company stock prices. To make future predictions on traffic (or price, as with stock), an upper and a lower bound are drawn within which the traffic fluctuates over time but doesn’t break out. Every single ranking position of a website translates to a certain amount of traffic, which in total falls within your allocated bandwidth.
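To make the stock-chart analogy concrete, here is a minimal sketch of how you could draw such a channel over your own daily organic traffic export. The bound estimation used here (a linear trend plus or minus two standard deviations of the residuals) and the estimate_bandwidth helper are purely my own assumptions for illustration, not a documented Google mechanism, and the traffic numbers are fake.

```python
import numpy as np

def estimate_bandwidth(daily_traffic):
    """Fit a linear trend to daily sessions and derive a +/- 2 sigma channel around it."""
    days = np.arange(len(daily_traffic))
    slope, intercept = np.polyfit(days, daily_traffic, 1)   # long-term slope of the channel
    trend = slope * days + intercept
    spread = np.std(daily_traffic - trend)                   # typical day-to-day fluctuation
    return trend - 2 * spread, trend + 2 * spread, slope

# Fake example: eight months of daily organic sessions with a mild upward slope
rng = np.random.default_rng(0)
traffic = 1000 + 2 * np.arange(240) + rng.normal(0, 50, 240)
lower, upper, slope = estimate_bandwidth(traffic)
print(f"estimated bandwidth slope: {slope:.1f} sessions/day")
```

Whether your daily numbers stay inside such a channel over a long enough window is exactly the kind of signal this article argues you should be watching, rather than any single ranking.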

Google can predict the search volume for any keyword and the resulting traffic for each position, for every variation of its search results.

Google’s new algorithms are set to only adjust the angle of your domain ranking bandwidth over time.

An individual jump in rankings for a keyword, caused by an underlying algorithm, will lead to more traffic. But if that jump would make the website break out of its upper bound, the supervising DRB algorithm tweaks other ranking positions down, applying slightly lower average positions to other keyword rankings, just enough for the total to stay within bounds.
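As a toy illustration of that supervising step: if one keyword jump pushes projected traffic over the domain’s upper bound, other positions get nudged down until the total fits again. The click-through curve, the rebalancing rule and every number below are hypothetical, my own sketch of the idea rather than anything Google has confirmed.

```python
def projected_traffic(position, search_volume):
    ctr = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}  # assumed CTR per position
    return search_volume * ctr.get(position, 0.02)

def supervise(rankings, upper_bound):
    """rankings: dict of keyword -> (position, monthly search volume)."""
    total = sum(projected_traffic(p, v) for p, v in rankings.values())
    # Demote other keywords one position at a time until the total fits the bound
    # (toy model: it ignores position collisions between keywords)
    for kw, (pos, vol) in sorted(rankings.items(), key=lambda kv: -kv[1][1]):
        if total <= upper_bound:
            break
        total -= projected_traffic(pos, vol) - projected_traffic(pos + 1, vol)
        rankings[kw] = (pos + 1, vol)
    return rankings, total

# One keyword jumps to #1; the supervisor claws the surplus back elsewhere
rankings = {"blue widgets": (1, 10_000), "red widgets": (2, 8_000), "widgets": (3, 20_000)}
print(supervise(rankings, upper_bound=5_500))
```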

This strategy of allocating a bandwidth solves the core search ranking issues Google faced:

  • Fairness. With every website kept within its bandwidth (even Wikipedia, Yahoo! and Google itself), no single property can become all-encompassingly dominant. While an underlying algorithm might decide Wikipedia is the best result for a search query, the supervising DRB can determine that Wikipedia is already at its upper bound in traffic, giving other websites the opportunity to take that traffic, or slightly adjusting other rankings Wikipedia had downwards.
  • Stability. No more big jumps leading to bad PR or frustrated webmasters. Search rankings have become boring and predictable(ish). Large e-commerce sites are able to predict their revenue (and invest it into AdWords ;)). The stability leads to an overall healthier ecosystem for Google, where everybody is incentivised to keep investing in (fresh) content over a longer period.
  • Spam. Spammers are identified, and their traffic heads towards zero by adjusting only the slope of their bandwidth (strongly) downwards. Spammers continue to see rankings, not knowing their efforts are futile. The downside of this approach is that spam sites still rank sometimes, but slowly and steadily they do disappear, with the spammer wasting time on the thing they thought (still) worked instead of moving on to a new spam tactic.
  • Blackbox. As a bonus, it makes it almost impossible to reverse engineer the ranking algorithm. Since many (sub-)optimal rankings, or shifts (+1, -5), are based on the domain ranking bandwidth and not on the site’s actions, content or linking, the same change can have different outcomes. E.g. you could be doing amazing work SEO-wise and deserve lots of #1 positions, except your bandwidth is already fully filled, so you are throttled by your upper bound. Then you can only wait, as more rankings are slowly gained to match a slowly increasing upper bound.

Implications for ranking

The domain ranking bandwidth has major implications for websites ranking in Google.

Chasing individual wins is futile if the overall DRB angle is flat (no growth). Individual ranking changes are not a tell-tale sign that you are going in the right direction if you don’t take your whole property’s direction into account.

This especially applies when you are already at the upper bound of the bandwidth:

A big win for a keyword you have been chasing does not translate into more total traffic. The DRB steps in, and the individual keyword win leads to an equal, matching loss, masked by being spread over many other (average) rankings being dialed down.

Next to that, when dealing with rankings:

  • Charts. Daily charts lose their meaning. They miss the context of the long term direction.
  • Tracking. Individual keyword trackers are irrelevant.
  • Complexity. Decisions have to be made whether a win for one keyword, is worth the matching potential loss for others.
  • Wins. Seeing a #4 jump to #1 is fun but irrelevant, until it’s matched with an upwards slope of total traffic over a longer period.
  • Featured listings. A big win such as a one-box, featured snippet, knowledge panel or “in the news” result won’t scale to (all) other keywords, given it is merely a side-effect of you being chosen to fill the DRB’s needs.

A real-life example is the chart of a property below: organic traffic over a period of 8 months. The bandwidth slope is clearly downwards. Some rankings are gained, but quickly throttled back down to match the downward slope (the peaks).

Variations occur on a daily and weekly basis, but overall all ranking activity stays within the DRB.

In summary:

  • Google allocates traffic to your domain using fine-grained control mechanisms.
  • Your website traffic from Google is bound to a pre-defined bandwidth (DRB).
  • The bandwidth only changes its slope slightly, through your actions or new algorithm adjustments.
  • Google plays a long game; you need to align as well.
  • Stop interpreting a single keyword ranking, start using a broad holistic approach.
  • If you are at the upper bound of your DRB, any win equals a matching loss (spread over multiple rankings).
  • If you see lots of variation in traffic, there’s a good chance you have not reached your upper bound yet, or your timeframe is set too small (< 14 days).

Continue to part II (work in progress), which provides a guide on how to identify your domain ranking bandwidth bounds, how to start thinking in long-term slope adjustments, how to read your existing rankings within this new framework, and how to find a strategy to grow your traffic by interpreting rankings within the DRB framework.

In the meantime, follow me on Twitter @yvoschaap for updates as I uncover this topic further.


I am in no way affiliated with Google Inc., and all statements are (educated) guesses.


My end of year

The end of the year again! Time for some semi-public reflection. First of all, I've been neglecting my own blog, which for the record has existed in this form since 2005 (and funnily enough also started with an end-of-year review). That is kind of a waste, given a blog is a great way to commit your thoughts.

I won't make any promises I might not keep, especially at a time of New Year's resolutions. But I want to share more here, and especially less of me only checking in to promote some new launch (which is what I've been abusing the blog for for a while now). I like Seth's idea of just showing up, which he reiterates in his new book This Is Marketing (nice read). But I have not been doing that for a while...

I have mixed feelings about this year, but overall 2018 was a year with good personal and professional focus (I think I know by now what makes me happy), though maybe not much luck with results (given my entrepreneurial KPIs are way 📉).

I launched Mailbook (which gets almost daily happy testimonials), my son went to school (and is very happy there), I now totally enjoy development with React, my daughter was potty trained at 2.5, I launched several new projects under the build.amsterdam concept, and I found a new (possible) partner along the way (hi Marco). And of course, our family enjoyed a healthy year (which needs a mention once you pass 25, I guess).

But the new year starts with a sad decision: I decided to stop operating Directlyrics (after over 11 years and 1 billion (!) page views). I've seen and enjoyed the high tide of lyrics, ringtones, licensing, ads and SEO, but also the downsides of low-value content, decreasing CPMs, increasing costs and more complex competition. Read more in this IndieHackers post 👉.

I wasn't able to reinvent the project, didn't surprise users, and stuck too long with SEO as the acquisition strategy. To be fair, not for lack of trying (lots of things were tried). Lyrics are a high-traffic business, and without the traffic (e.g. under 100k visits a day it's unmaintainable) the revenue isn't enough to continue. Kevin (the editor) was amazing over the last decade. Hire him.

But this makes room in 2019 to let go of old habits and start fresh, finding new fuel and a route to success (defined as... ). Less lurking (bye Twitter), more actionable days (hello OKRs) and even more focus (oops, this does sound like a New Year's resolution now).

I'll be enjoying the Swiss mountains the last few days. I'll keep you updated :)


Finding product/market fit with Mailbook

At the beginning of this year (2018), Bram, Emiel and I decided to build a cool side project. Bram was struggling at the time with collecting addresses from friends to send out an upcoming birth announcement, so he came up with the idea that we named Mailbook. After users sign up, they receive a personal link, share it with their friends, friends add their address, and the addresses appear neatly in their personal address book. Simple. And inherently viral, because every new user sends a link to Mailbook to, on average, over 50 other people (in the same demographic).

In 6 months over 40,000 people added their addresses to a Mailbook. Some users collected over 200 addresses in under two days. And growth only seems to accelerate. Very cool to see.

I continue to work on it, empowered by the positive feedback and incoming feature requests. Most recently I released a quick way to create and print address labels. Compare my solution with the horrors built with Word + Excel in this Dutch article on adresetiketten printen.

But the latest change: an English version of Mailbook 🎉. Not only good for my Dutch users with international friends, but also to expand the reach of the product.

So, are you expecting a baby anytime soon, or planning a wedding? Start collecting addresses.

More is explained (in Dutch) on this gemakkelijk adressen verzamelen voor geboortekaartjes page.


Latest published articles:

Below are the latest articles I published in several publications:


The Economist Explorer

I've recently posted two articles on Medium:

I'll maintain this blog and continue posting updates here, but Medium has evolved into a very useful publishing tool.


My own multi-room audio setup

I always liked the idea of having a Sonos-like multi-room setup in my house. The same tune near my dining table, in the kitchen or even outside. But I don't want to create another audio system next to my fine tuned hi-fi stereo setup. I already support Airplay, Bluetooth, Spotify Connect, etc. through my high-end AV receiver. But that audio setup is only perfect for my living room area.

Physically wiring up other rooms by creating extended zones doesn't feel like a flexible solution. Wires suck, so wireless, right? Yes, but then we're stuck with latency that will interfere with the existing wired setup, unless you like echoes or a 2-second delay (looking at you, AirPlay). The scope of my wants:

  • Multiroom speakers which are in sync
  • Existing receiver as audio source
  • No wires throughout the house
  • Amplified speaker, preferably portable
  • Relative high quality output
  • Volume control, both local and central
  • Stereo (optional)

Most existing wireless audio solutions introduce a hub setup, where a piece of hardware's only job is to sync up the connected speakers. Every brand has its own solution. My issue with those is that they introduce an (expensive) piece of hardware, which actually adds latency from the source. There must be a better way.

The solution lies with Bluetooth. Apparently Bluetooth - of all technologies - has improved significantly over the years, and can now carry an audio codec focused on low-latency audio transmission: aptX. To be more specific, although confusingly named, the newly launched aptX Low Latency variant. Without making this sound like an ad, it's actually kind of cool: it specs at 32 ms latency end-to-end while keeping a 'near CD quality' signal (352 kbit/s).

Compared to Sonos prices, I can now create a relatively low-cost custom wireless solution that extends my existing hi-fi: a quality Bluetooth speaker (or any amplified portable speaker or soundbar) plus a $50 USD Bluetooth transmitter.

Unfortunately, only a limited number of speakers currently support aptX Low Latency. I did find a match with B&O's Beoplay S3: released recently, but packed with a nice Class D amp and a beautiful design. Pricey, yes, but compared to what's available this was the best match. The sound feels big for its relatively small size, and it is nicely tuned.

After getting everything hooked up, the real test came. Was Bluetooth aptX Low Latency the solution to my wants? How is the experience throughout the house? Happily, I can say it sounds great. Honestly, I did have to do an audio sync between the two speakers. But the setup is easy to understand, and very flexible.

I now own only one speaker, but it also has a stereo mode when paired up with another S3.

Tech stack:

  • Miccus Mini-jack TX4: AptX low-latency capable Bluetooth transmitter with an audio-in.
  • B&O Beoplay S3
  • Denon X4100W receiver
  • Kef LS50
  • Spotify app, with Spotify Connect
  • Denon app

Limitations:

  • Bluetooth's 10 meter open range and pairing.

Recent Product Hacks

I've always enjoyed launching small product hacks. Building something from scratch gives a great rush: hacking together APIs, backend code, new functions and responsive layouts. A hacker's mentality sets aside any sensible doubts that could lead to the trap of overthinking, while challenging the brain to kick into creative thinking and problem-solving mode. Some of these hacks became timeless successes; others vanished into a folder on my PC.

Below are a few recent projects I launched:

  • With the power of the Google Analytics Real Time Reporting API I created a beautiful search activity visualization: a remake of Google's own live search presentation, but made for any site running Google Analytics. It's listed in the Google Analytics app gallery.
  • A Producthunt.com leaderboard; there's no API, so I crawl the site daily. Set up to get insights into what's going on in the Valley, it provides an overall overview of the activity on Product Hunt, with a focus on the hunters as well as the products. I also found out I have so many data points that I can easily flag dubious voting. Upvote.
  • I really enjoy YouTube, but it seems they go out of their way to make it difficult to just continuously play a channel's videos as one big playlist. Case in point: Majestic Casual, a video channel with over 1.7M followers that serves a great selection of music to play in the background. So I created the unofficial majesticcasual.tv, which does what I want and looks nice too.
  • Marc Andreessen tweet essays fixes the readability of the thoughts of well-known investor Marc Andreessen, which are spread out over multiple tweets on Twitter. @pmarca is known to tweet up to a dozen tweets in a row - a tweetstorm - but Twitter has no good way to group and read them. I even found that Twitter rolls up his intermediate tweets in the stream. The project received very positive, high-profile attention on ProductHunt and on Twitter. Really cool to see people I follow also mention this project on Twitter.
  • Solved an annoying issue where the native filters on iPhone photos can't be seen on a Windows computer, because the filter is only part of the metadata. My previous solution was to email myself the photo. A better solution is this crazy hack that uses the Dropbox API: I auto-sync with Dropbox, and this way I just select which photos I want to save back into Dropbox with the filter baked into them. Magic!
  • And my latest hack is Citytrip, which collects recommendations from Airbnb hosts and lists them on a map. Great for discovering local places to eat and drink. More details on the Producthunt listing.

All of these projects scratch a personal itch, were developed in a handful of hours, and more importantly were fun to make and able to capture some of the web's attention.


Product Hunt hunters analyzed

Exploring the insiders in the startup scene

Only live to the public for 100 days, Product Hunt is already boasting an impressive, affluent user base. Launched initially as a 20-minute MVP by Ryan Hoover (ex-PlayHaven, Startup Edition) and further developed into an actual product together with Nathan Bashaw (General Assembly), its user engagement and traction show they are on to something.

Their invite-only system has grown a user base featuring a who’s-who of the startup tech scene. Well known investors, founders, journalists, and developers are all hanging around.

My everlasting interest in the startup scene pushed me to look more closely at these so-called tech insiders. A quick crawl of the site pulled in all 636 contributors, who posted a total of 1,571 products since October ’13.

By matching their Twitter usernames against the Twitter API, I was able to pull in more profile details like location and Twitter reach.
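For the curious, the matching step can be as small as one call to Twitter's users/lookup endpoint. Below is a minimal sketch, assuming the v1.1 REST API and an app-only bearer token; the actual crawler and matching code looked different, the twitter_profiles helper is just an illustrative name, and the handles in the comment are only examples.

```python
import requests

def twitter_profiles(usernames, bearer_token):
    """Return follower count and location per contributor Twitter handle."""
    resp = requests.get(
        "https://api.twitter.com/1.1/users/lookup.json",
        params={"screen_name": ",".join(usernames[:100])},  # endpoint accepts up to 100 handles
        headers={"Authorization": f"Bearer {bearer_token}"},
    )
    resp.raise_for_status()
    return {u["screen_name"]: (u["followers_count"], u.get("location", ""))
            for u in resp.json()}

# Example: twitter_profiles(["rrhoover", "hnshah"], BEARER_TOKEN)
```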

Leaderboard

Instead of doing a lengthy write-up of my findings, I've coded up a leaderboard that updates every 12 hours.

I ranked the most influential people who contributed at least one product on Product Hunt by their Twitter follower count:

Ashton Kutcher doer 15,674,053
Kevin Rose Partner, Google Ventures 1,447,113
Tristan Walker 279,690
Brad Feld Managing Director, Foundry Group 162,671
Johnny Shahidi Co-Founder, Shots 141,352
Hiten Shah Founder, KISSmetrics 141,192
MG Siegler 139,271
Nihal Fares Co-founder & Chief Product at Eventtus 100,937
Fred Oliveira Head of Product, Disruption Corp 59,428
Hunter Walk Partner, Homebrew 57,575

If we look at the site's activity, these are the most active hunters:

Ryan Hoover Product Hunt / Tradecraft 948 #77
Murat Mutlu Co-Founder, Marvelapp 653 #44
Jonno Riekwel Product designer, Jonnotie 438 #48
Dave Ambrose Venture Investor 353 #38
Kevin William David CEO,WalletKit 342 #42
Adam Kazwell Product Manager 342 #40
Robert Shedd Founder 334 #34
Geoffrey Weg Independent 313 #23
Nathan Bashaw Product Manager at General Assembly 298 #37
James Mundy Founder, Foundbite 277 #2

Follow all these hunters with this Twitter list. If there is interest I'll add some more features to the leaderboard.


Site search visualization using Google Analytics

Last summer Google relaunched its user search trends page, a page featuring trending search keywords around the world. A cool addition is their visualization of these Google searches. While it is a great way to visualize data, it pretends the searches are happening at Google in real time, while if you dive into the code it's set up to update only once every hour.

Next to that the Google Analytics team launched a developer API for real-time website reporting. The API allows queries on what visitors are doing on your site right now.

As a fun project I thought of combining both of these tools into one: visualizing visitor searches using the Real Time Reporting API data.

Launch project.

You can set up your own site visualization in just a few steps: first authorize access to your Analytics data, then select your site and set the query parameter used for your site search (usually the letter q). Click save, and see your visitors' searches appear live as a beautiful visualization.
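If you'd rather poll the same data yourself, here is a rough server-side sketch of the query behind it, assuming the Real Time Reporting API (v3) through google-api-python-client; the visualization itself runs this query from the browser after the authorization step above, and live_searches is just an illustrative helper name.

```python
from urllib.parse import urlparse, parse_qs
from googleapiclient.discovery import build

def live_searches(credentials, view_id, search_param="q"):
    """Poll active pageviews and extract the site-search keyword from each URL."""
    analytics = build("analytics", "v3", credentials=credentials)
    result = analytics.data().realtime().get(
        ids=f"ga:{view_id}",
        metrics="rt:activeUsers",
        dimensions="rt:pagePath",
    ).execute()
    keywords = []
    for page_path, _active_users in result.get("rows", []):
        query = parse_qs(urlparse(page_path).query).get(search_param)
        if query:
            keywords.append(query[0])   # e.g. "/search?q=shoes" -> "shoes"
    return keywords
```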

If you don't have many visitors on your site, you might be staring at a red screen for a while; do a search on your own site to see it appear. This is a video example of my visualization. Let me know if you run into any issues.

Featured on:


Facebook values the privacy of its billion users at $4,500

Back in 2009 I found a major security exploit on both Facebook and the then popular MySpace, which exposed access to a user's personal data. I reported both data leaks to the social networks, and weeks passed as I had to convince them of the major hole they had left in their security. Reluctantly, the leaks were closed after details appeared on Reddit's homepage. MySpace's PR quickly denied there was an issue at all (luckily a well-regarded TechCrunch reporter could confirm my proof of concept did work), and Facebook proposed to send me a t-shirt as a thank you (which I never received).

Since then, website security has proven never to be fool-proof, leading to privacy-breach news stories and diminishing users' trust in handing over their personal details and content. To counteract these security breaches (and the media exposure that comes with them), most internet giants (Facebook, Paypal, Google, Twitter, Github) have set up so-called whitehat security researcher programs, which allow whitehat hackers to report security leaks so they can be patched and closed ("responsible disclosure"). In exchange for the disclosure, and for not actually exploiting the issue, there will be no prosecution (!) and a finder's bounty as reward. An idea initially developed in the software industry, due to a growing black market of parties buying exploits to set up botnets and whatnot, detailed in this interesting article in The Economist.

Facebook initiated such a program in 2011. This year, Facebook already lists 65 people who reported a confirmed vulnerability, which Facebook defines as "[a vulnerability] that could compromise the integrity of Facebook user data, circumvent the privacy protections of Facebook user data, or enable access to a system within the Facebook infrastructure". In other words, something that puts user privacy at risk. For example, the researchers Nir Goldshlager, Homakov and Isciurus are all filling up their own blogs detailing their numerous security exploit findings on Facebook, and have become well known on the YC news homepage.

Actually, 66 people reported a vulnerability in 2013

Given my experience in 2009 (still not a proud owner of a Facebook t-shirt) and intrigued by the bold sentence on Facebook's security researcher page, "There is no maximum reward", I went out and started giving Facebook's code another peek. Tracking several Facebook developer plugins, I stumbled upon an interesting Flash file used by Facebook which serves as a proxy for communicating data between domains (to work around cross-domain browser limitations). For me, a clear sign to keep digging. A few hours later, by jumping through a few hoops using cleverness, juggling subdomains around, and walking around a regex, I was able to load any user data or content using the user's own Facebook session, as shown in this proof of concept video: http://www.screenr.com/SWi7 [2m11]. In the video I pull in a user's private email addresses, but the exploit could also easily access any content, including items tagged with the privacy restriction "Only Me". Yikes!

My excitement about the discovery was obviously high, given my exploit, just like in 2009, completely exposed a Facebook user's account.

Facebook's bounty

Facebook holds an unprecedented amount of personal data and content, in quantity as well as quality, which can't be compared to any other (online) entity. Therein also lies one of its biggest operational risks. After finding the leak, I disclosed the details to Facebook's security team. After confirmation, my disclosure of the exploit accessing a user's account was awarded a bounty of... $4,500. A nice day's pay, but a paltry fee for pointing out a gaping hole in the security of a social network holding the personal data of over a billion people. Without disclosures from whitehat hackers, like mine, these exploits can also become available to dubious parties who could wreak (digital) havoc. An example could be a rogue ad network that would love to harvest and tie in users' Facebook identities for ad targeting. Facebook's PR reaction to these exploits is usually that they haven't seen actual usage "in the wild", but obviously, if it were abused, it would be abused in silence (while also being a criminal offence).

Even setting aside whether the discovery and disclosure are worth only USD 4,500: if you hold that against the continuous struggle Facebook has with its users' privacy, the cost of its thousands of highly skilled employees, and the real implications of exposing user data to actually dubious parties, $4,500 is clearly a small sum for help in protecting Facebook users' personal data. With 66 exploit disclosures in the past 5 months, you have to wonder how many exploits are found but not disclosed to Facebook, and also why Facebook is so dependent on external security research and disclosure for the security of its user data.

Update: /r/netsec lists some more technical details if you are interested, and Hacker News has an interesting discussion.

Update #2: I'm now also listed on facebook.com/whitehat.

