Perfect example, near the top there's some text about "It's made up of over[........] 30 TRILLION[.........] INDIVIDUAL PAGES[........] and it's constantly growing." But there's nothing to indicate that I should stop somewhere and wait for some more text to show up.
Maybe they should limit how far down you can scroll by setting the height of some element, and only increase it when the animation is finished.
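A minimal sketch of that idea, with hypothetical element ids and section heights (nothing here is from the actual page): cap the container's height, and grow it only when each animation signals that it has finished.

```javascript
// Sketch of the "grow the page as animations finish" idea.
// The id, section height, and callback wiring are all hypothetical.
function makeScrollGate(stage, sectionHeight) {
  let unlocked = 1; // how many sections the user may currently scroll through
  const apply = () => {
    stage.style.height = unlocked * sectionHeight + "px";
    stage.style.overflow = "hidden"; // nothing below the cap is reachable
  };
  apply();
  return {
    // Call from each animation's end callback (e.g. an "animationend"
    // listener) to let the user scroll one section further.
    unlockNext() { unlocked += 1; apply(); },
  };
}

// Usage with a stub element (stands in for document.getElementById("stage")):
const stage = { style: {} };
const gate = makeScrollGate(stage, 600); // 600px per section, a guess
console.log(stage.style.height); // "600px" until the first animation ends
gate.unlockNext();
console.log(stage.style.height); // "1200px"
```

The point is only that scroll position can never get ahead of the animation, so there is no apparently blank space to skip past.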
Edit: the key problem here isn't the "scrolling makes things happen" gimmick that's popular lately. The problem is that it starts certain animations and fade-ins some time after I've already scrolled past an apparently blank space.
After your comment I noticed a lot of comments on the same issue, so I decided to try it again. The second time I noticed that a blue arrow flashes at the bottom of the screen after all the content has populated, almost prompting you to scroll down. I suppose most everyone, including me, initially scrolled too fast to even see the first "arrow/prompt". Despite the discovery of the prompt feature, some of the issues remain. For example, wondering how far to scroll down before stopping (maybe PgDn?), and wondering which parts of the "page ? Exhibit ? Installation ? whatever it's called" are interactive.
>page ? Exhibit ? Installation ? whatever it's called
I too was unsure what to call it, but if they listen to the feedback, I think "whatever it's called" is really awesome and could be a legitimate substitute for the PowerPoint platform. At least I would be interested in making a few presentations with it.
Agreed, I like it. It looks like the stuff that goes on in futuristic movies, with a lot of things happening on the screen; no one is really looking at anything in particular, but it's impressive. This has potential. Who's up for making this WIC ("whatever it's called") software?
now an anecdote (because i feel like telling one): this week started for me with an interview that finally got published http://werbeplanung.at/news/marketing/2013/02/interview-mit-... (it's german) in that interview i claimed that
* 80% of everything written about SEO and Google is bullshit
* that all the rumors, tips and trends are actually hurting business
* that we should treat SEO as a numbers based craft of constant optimizations
* instead of the esoteric bullshit art it is currently
* and, if search traffic is important for the success of a business, they must rid themselves of external (agency) dependencies and develop internal structures
nothing too far-fetched i think. everybody knows the SEO vertical is full of bullshit, i just took some time to estimate a number (based on a random sample of collected blogposts that at least one person tweeted about)
yeah, i got a lot of angry emails, skype messages, linkedin messages, xing messages after the interview was published.
most of them mentioned at least one of these words
* pagerank
* whitehat
* blackhat
* grayhat
* linkjuice
* panda
* penguin ...
so yeah, thx google for educating people about search. keep up the good work.

If you read the right sources, a majority of SEO advice is correct.
www.seomoz.org
http://static.googleusercontent.com/external_content/untrust...
www.inbound.org (homepage stuff that has been voted up)
That's a contradiction. If you have to read the right sources, then by definition the majority of advice is not correct.
"80 Prozent von allem, was über SEO geschrieben wird, ist Bullshit" ("80 percent of everything written about SEO is bullshit")
some things just cross the language boundaries.

At first I thought it was supposed to represent a Gaussian-like probability distribution. But when I clicked on it, the resulting animation showed a series of such distributions getting flattened by some kind of distribution-flattening hydraulic press. The accompanying caption: "Gets to the deeper meaning of the words you type."
If I was confused before, now I was completely lost.
How is deeper meaning represented by distribution flattening? I'd think it would be just the opposite, raising probability mass around the likely meanings, not spreading it out into a uniform distribution over all meanings.
Baffling.
If anyone has figured it out, please do share.
(Maybe I'm taking the diagrams too seriously.)
EDITED TO ADD: New option: If you don't have any clue what it means either, come up with an entertaining yet plausible story that fits the hydraulic-press-vs-mustaches animation and share that story instead.
EDITED TO ADD: Example: At Google’s new eco-friendly data centers, NLP computations are performed by genetically enhanced inchworms. Difficult queries, however, can cause the inchworms to get cricks in their backs. In such cases, Google’s innovative back-massager descends and restores the inchworms to their preferred position (prone), from which they can return to their computations with renewed vigor.
But the way I interpreted it was: before, the query was short, scrunched up, and slightly ambiguous. The algorithm then lengthened it, representing expanding it to find the deeper meaning.
That search is very complex (I knew that, but not with this technical detail).
Or...that Google is trying very hard to maintain user interest with gimmicky shows of why it's cool and cutting edge and necessary.
Not that Google isn't those things... this just seems like an unnecessary expenditure of time. We know it's complex, Google. Improve some other features, and stop shutting others down, instead of making these web 2.0 animations.
There are many things that are still broken in search; I talk about one specific experience here:
http://urgeous.com/p87t3aaa40g-for-some-queries-all-first-10...
("For some queries, all first 10 results on Google are spam").
Very nice page, though.
Thousands of small sites were killed by Panda for no good reason, and have little hope of getting their traffic/income back. Google's spam policy is skewed heavily in favor of large sites and Google's own properties.
Crap factor = %advertising on page.
edit: Here are just the queries:
"sublime text 2" "focus group"
cisco "anyclient" - this one gets silently rewritten to cisco anyconnect
shopify "deduplicate" - with verbatim activated -
Am I the only one who finds it irritating as hell to scroll when it renders slowly? I don't think this is the end game. There has to be something better.
"In order to provide some of the core features in Google Apps products, our automated systems will scan and index some user data. For example:
-Email is scanned so we can perform spam filtering and virus detection.
-Priority Inbox, a Gmail feature, scans email messages to identify which messages are considered important and which are considered not important.
-If you are using Google Apps (free edition), email is scanned so we can display contextually relevant advertising in some circumstances.
-Some user data, such as documents and email messages, are scanned and indexed so your users can privately search for information in their own Google Apps accounts.
*Google Apps data is not part of the general google.com index, except when users choose to publish information publicly."
I know, I know, you don't do that. Nope, no one does. Everything is fine and dandy. Smile, everyone, no problem here.
However it does give a better insight into the challenges of building a search product. It is a series of really challenging problems. So many people take search for granted these days.
100 million gibibytes ~= 95 pebibytes (10^8 GiB / 2^20 GiB-per-PiB ~= 95.4 PiB)
1. Spam detection is automatic
2. There are 6 types of spam
-Unnatural outbound links (link selling)
-Content copying/manufacturing
-Keyword stuffing
-Forums/user generated spam
-Parked domains
-Sites hosted on spammy DNS
-Different content for humans and bots
-Hacked sites
3. Google is removing as many as 50K spam sites per month, and they get 8K reconsideration requests
4. Google's machine learned relevance model may be using about 200 features
Aren't these just some random numbers that they pull out of the air?
var kd = function () {
    function a() {
        // Look up the three display nodes on first run (Q is the page's
        // element-lookup helper).
        e = e || Q("number_of_seconds");
        d = d || Q("searches_count_num");
        f = f || Q("searches_count_unit");
        // Whole seconds since page load, wrapped at one day (86400 s);
        // ~~ truncates to an integer.
        var a = ~~ (((new Date).getTime() - h) / 1E3 % 86400),
            k = a * b + ""; // elapsed seconds * searches-per-second, as a string
        // Pick a scale word ("thousand", "million", ...) from the digit count.
        // (Because + binds tighter than ||, the || "" fallback can never fire.)
        f.innerHTML = " " + c[Math.ceil(k.length / 3)] || "";
        e.innerHTML = a;
        // Insert thousands separators: any digit followed by groups of three.
        d.innerHTML = k.replace(/(\d)(?=(\d\d\d)+(?!\d))/g, "$1,")
    }
    // b = 100 billion searches/month / 2,592,000 s/month ~= 38,580 searches/s.
    var b = ~~ (1E11 / 2592E3),
        c = " hundred thousand million billion trillion quadrillion quintillion sextillion septillion octillion nonillion decillion undecillion duodecillion tredecillion quattuordecillion quindecillion sexdecillion septendecillion octodecillion novemdecillion vigintillion".split(" "),
        e, d, f, h = (new Date).getTime();
    return {
        hc: a,
        rb: function () {
            a();
            setInterval(a, 100) // refresh ten times a second
        }
    }
}();
It's just running on an interval and doing in-page calculations, so it's entirely estimated. The value of "b" in this function evaluates to a little over 38,000 (https://www.google.com/search?q=1E11+%2F+2592E3), which they're using as the basis for the calculation.

I don't think so. It seems logical that Google's been keeping statistics about this sort of thing, so it doesn't surprise me that they keep track of such things as 'average queries per second'.
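For reference, the constant from the snippet can be unpacked like this (a sketch; reading 1E11 as Google's "100 billion searches a month" figure and 2592E3 as the seconds in a 30-day month is an interpretation, not something the code states):

```javascript
// Re-deriving b = ~~(1E11 / 2592E3) from the counter snippet.
const searchesPerMonth = 1e11;          // assumed: 100 billion searches/month
const secondsPerMonth = 30 * 24 * 3600; // 2,592,000 seconds in a 30-day month
const b = Math.trunc(searchesPerMonth / secondsPerMonth);
console.log(b); // 38580 searches per second, on average

// The counter then multiplies b by elapsed seconds and adds thousands
// separators with the same regex the page uses:
const elapsedSeconds = 12; // example value
const count = String(elapsedSeconds * b)
  .replace(/(\d)(?=(\d\d\d)+(?!\d))/g, "$1,");
console.log(count); // "462,960"
```

So the ticker is a fixed average rate replayed client-side, not a live feed.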
Google search results show a time value for each search, e.g.: About 2,210,000,000 results (0.12 seconds). Is this the machine time per search? This number is often around 30 ms, give or take a factor of two. If so, each machine can handle about 30 searches per second, and 38K searches per second would need about 1000 machines. Sounds a bit too low... so my interpretation must be wrong somewhere.
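Making that back-of-envelope arithmetic explicit (every number here is the guess from the comment above, not a measured figure):

```javascript
// Rough machine-count estimate under the (probably wrong) assumption
// that each search occupies one machine serially for ~30 ms.
const searchesPerSecond = 38000;    // from the homepage counter
const machineTimePerSearch = 0.030; // 30 ms, the guessed per-search time
const searchesPerMachinePerSecond = 1 / machineTimePerSearch; // ~33
const machines = Math.round(searchesPerSecond / searchesPerMachinePerSecond);
console.log(machines); // ~1140 machines
```

The estimate only holds if searches are handled serially, which is the assumption the replies below take issue with.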
Since all of those queries are fired at the same time, the only metric that matters at the end is the wall time, not the CPU time used during the query.
I also seriously doubt that the servers that handle the Google front page can only do one query at a time; at the very least they're multithreaded, and they probably handle many queries concurrently. It probably works as below:
1. Parse the query
2. Send the query to backend servers
3. Wait until all backends have replied, or at most 250ms (or some other timeout)
4. Assemble the result page and ship it back to the client
While the server is idling for the backends to reply, it probably processes other queries; it wouldn't make sense to waste that much CPU power.
Finally, your example says 0.12s (a random query on my end gave a response time of 0.69s), which is 120ms (or 690ms for mine), well over the 30ms figure.
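The four steps above can be sketched as a toy scatter-gather with a deadline (simulated backends only; this is an illustration of the pattern, not Google's actual architecture):

```javascript
// Fan a query out to every backend, then assemble whatever replies
// arrive before the deadline; slow backends are simply dropped.
async function search(query, backends, timeoutMs) {
  const deadline = new Promise(resolve =>
    setTimeout(() => resolve("timeout"), timeoutMs));
  // Each backend races against the shared deadline promise.
  const replies = await Promise.all(
    backends.map(backend => Promise.race([backend(query), deadline])));
  // Step 4: keep only the replies that beat the deadline.
  return replies.filter(reply => reply !== "timeout");
}

// Usage: two fast simulated backends and one that misses the 100 ms cutoff.
const makeBackend = ms => query =>
  new Promise(resolve => setTimeout(() => resolve(`${query}@${ms}ms`), ms));
search("cats", [makeBackend(10), makeBackend(20), makeBackend(500)], 100)
  .then(results => console.log(results)); // resolves with ["cats@10ms", "cats@20ms"]
```

While awaiting the backends the event loop is free, which is exactly the "process other queries while idling" point above.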
function a() {
e = e || Q("number_of_seconds");
d = d || Q("searches_count_num");
f = f || Q("searches_count_unit");
var a = ~~ (((new Date).getTime() - h) / 1E3 % 86400),
k = a * b + "";
f.innerHTML = " " + c[Math.ceil(k.length / 3)] || "";
e.innerHTML = a;
d.innerHTML = k.replace(/(\d)(?=(\d\d\d)+(?!\d))/g, "$1,")
}
var b = ~~ (1E11 / 2592E3),
So yes.

No kidding.
Source: I do hacking on top of Lucene.
There's a slight oversight, it should be: "We write programs & formulas to deliver the most profitable results possible for this quarter"
As to your point: yes, Google does utilize its power, leverage, and dominance to favor itself and its own products. And don't feel too bad that others are demanding you show proof; how quickly they forget (apparently including Google's own employees who replied to your comment) that the FTC spent the last year investigating Google's behavior on exactly this front. Some of those charges concerned Google using its knowledge of search and advertising to identify the most profitable online businesses, then entering the space with its own product to compete directly, or simply driving up the price of the advertising terms (sometimes by 1000%). So imagine you were buying keyword "y" for $x per click. Google comes along and competes; now its product is at the top of the organic results, and you need to pay 1000 × $x for the same advertising. And when you pay 1000 × $x for the same ad space, that money goes straight back into Google's own pocket.
So do not feel too bad: the FTC spent millions investigating Google to find said evidence, and ultimately allowed Google to settle for $22.5 million, to let others use the Motorola patents it acquired, and to change its AdWords API. And in keeping with their motto, "Don't be evil": in the last 24 hours the media has gone wild alleging that Google spent $25,000 to honor the FTC Director during the investigation. I know that when I am being investigated for federal anti-trust allegations, I too like to honor the investigator. And, like Google, I do not give the investigator the money directly; I give it to a third party who in turn gives it to the investigator's office. This gives the investigator time to close the case before allegations are made, and when allegations eventually are made, it allows the investigator to say that at the time it was unknown who "donated" the money for the honorarium.
What about when Google rolled out universal search only after buying YouTube?
Search is rank-and-display. Products is 100% bought, and you had to "disclose" that. I say "disclose" because it's not apparent unless searchers click on a link; that's how ethical you are.
What else is bought and paid for behind the scenes? Why should we trust you?
Says... you, right? Based on which examples of Google pursuing quarterly profits at the expense of users?
Duh! Google Local was a joke compared to better pages from Yelp. Google+ is even worse. Pages are now filled with ads, because Google discovered that ads yield better results (how convenient!). Need I go on?