I recently caught up on the topic of keyword density. One of the popular articles I read came from the authoritative site SEOBook.com. After finishing the text, here are some glaring errors I spotted:
Early / primitive search technology was not very sophisticated due to hardware & software limitations. Those limitations forced early search engines like Infoseek to rely heavily on documents for relevancy scoring. Over the past decade and a half, search engines have grown far more powerful thanks to Moore's law, which has allowed them to incorporate additional data into their relevancy scoring algorithms. Google's big advantage over earlier competitors was analyzing link data.
Source: http://tools.seobook.com/general/keyword-density/
Dr. E. Garcia explained why keyword density was a bad measure of relevancy in The Keyword Density of Non Sense.
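For context, the "keyword density" being criticized is usually computed as the number of times a keyword appears divided by the total word count of the page, times 100. A minimal sketch of that classic formula (the tokenizer and function name here are my own illustrative choices, not from any engine or from Garcia's article):

```python
import re

def keyword_density(text: str, keyword: str) -> float:
    """Classic keyword density: keyword occurrences / total words * 100."""
    # Crude tokenizer: lowercase alphanumeric runs; real engines do far more.
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    count = words.count(keyword.lower())
    return 100.0 * count / len(words)

density = keyword_density("seo tips for seo beginners", "seo")  # 2 of 5 words = 40.0
```

The point of Garcia's critique is that this ratio says nothing about how a term is distributed across a collection of documents, which is why it is a poor proxy for relevancy.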
Search engines may place significant weight on domain age, site authority, link anchor text and usage data.
Each search engine has its own weighting algorithm, and these differ across the major engines.
Each search engine has its own vocabulary system that helps it understand related words.
Some might place more weight on the above domain-wide & offsite factors, while others might put a bit more weight on on-page content.
The page title is typically weighted more heavily than almost any other text on the page.
The meta keywords tag, comment tags, and other somewhat hidden inputs may be given less weight than page copy. For instance, most large-scale hypertext search engines assign zero weight to the meta keywords tag.
Page copy that is bolded, linked, or placed in a heading tag is likely given greater weight than normal text.
Weights are relative.
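The list above can be sketched as a weighted sum over page regions. The weights below are purely hypothetical illustrations of the relative ordering described (title above body copy, meta keywords at zero); no engine publishes its actual values:

```python
def weighted_score(occurrences: dict[str, int], weights: dict[str, float]) -> float:
    """Sum occurrences per page region, scaled by that region's relative weight."""
    return sum(weights.get(region, 0.0) * n for region, n in occurrences.items())

# Hypothetical relative weights reflecting the ordering above:
# title > heading/bold > body copy, meta keywords ignored entirely.
weights = {"title": 3.0, "heading": 2.0, "bold": 1.5, "body": 1.0, "meta_keywords": 0.0}
occurrences = {"title": 1, "heading": 1, "body": 4, "meta_keywords": 10}
score = weighted_score(occurrences, weights)  # 3.0 + 2.0 + 4.0 + 0.0 = 9.0
```

Note how stuffing the meta keywords tag (10 occurrences) contributes nothing, which is exactly why weights being "relative" matters more than any raw density figure.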