A search engine competing with Google by keeping users anonymous just tripled its growth

One of the few companies competing with Google in the web search industry is DuckDuckGo, which tries to stand out by pledging not to track its users. Specifically, DuckDuckGo doesn’t share its users’ search queries with other sites, doesn’t store their search histories, and doesn’t store any of their computer or location information—things many other search engines do to boost their advertising efforts.
While DuckDuckGo is still a fraction of Google’s size, it just wrapped up its second consecutive year of impressive growth. (One of the company’s transparency measures is to publish aggregate daily usage statistics.)

At the end of 2014, DuckDuckGo was receiving about 7 million direct search queries per day, roughly double the amount it received a year prior. But if you look more closely, DuckDuckGo had a huge end of the year, after more modest growth in the beginning.

Why? Two big distribution deals seem to have played key roles. In September, Apple started including DuckDuckGo as a search option in its Safari browser for iOS and OS X. (Google is still the default option.) Then in November, Mozilla added DuckDuckGo as a search option for its Firefox browser.
“DuckDuckGo has significantly increased its user base from both integrations,” CEO Gabriel Weinberg, who launched the company in 2008, tells Quartz. “Though the exact amount is unclear since we don’t track people.”
DuckDuckGo search query chart annotated
A look at the company’s public stats suggests both moves contributed to an acceleration of growth. Over the first eight months of 2014, DuckDuckGo’s average month-over-month query growth was 3.5%. Over the last four months of the year, it was an average 10.2%—roughly triple.
The company’s steepest growth driver to date, however, still seems to be the mid-2013 revelations of mass government internet surveillance. Google Chrome, for what it’s worth, still doesn’t include DuckDuckGo as a pre-installed search option.

Yandex Pushes Privacy With Beta Release Of Its Minimalist Browser

Russia’s Yandex has added a suite of new privacy-centric features to an experimental browser it unveiled last year, as well as switching the software from alpha to beta and making it the default browser for international users.

The software giant took the wraps off the minimalist browser last November, dubbing it a “concept” and offering it alongside its extant Yandex.Browser. It’s now saying the concept will be the mainstream version it offers in international markets — although existing users of the older browser can carry on as is.

However, in its largest market, Russia, and select other regional markets, it will continue to maintain the concept browser as an experimental alternative, with its older browser remaining in place as the primary offering there.

A Yandex spokesman said this is because in markets where it provides its core services (namely search and recommendations) it needs to know more about users, so switching to a private version completely would erode its quality of service. But, in international markets, it’s freer to experiment with offering private browsing as a lure to drive usage (and better compete with data-harvesting behemoth Google).

All of Yandex’s browsers combined have a user base of 22 million monthly users across all markets, according to the spokesman.

Additional privacy features were among the most requested items in feedback on the alpha release, according to Yandex, especially from users in Germany, Canada and the U.S.

New privacy-focused functionality in the beta version of the minimalist browser includes not gathering user data, stats or browsing history by default, plus a Stealth Mode extension that lets users instantly block analytics cookies and social network plugins.

The browser also prompts users to choose their own default search engine on launch — offering a choice of three, which varies depending on the user’s location.

More details about the new release can be found on the Yandex blog. AdGuard, which developed the Stealth Mode extension for the browser, has also published its source code on Github.

The beta version of the browser is available for Windows and OS X, in 15 languages, and can be downloaded at browser.yandex.com.

Can new search engine ‘SciNet’ outsmart Google?

LONDON: Researchers claim to have developed a new search engine that outperforms current ones, and helps people to do searches more efficiently.

The SciNet search engine, developed by researchers at the Helsinki Institute for Information Technology HIIT, is different because it turns internet searches into recognition tasks by showing keywords related to the user’s search on a topic radar.

People using SciNet can get relevant and diverse search results faster, especially when they do not know exactly what they are looking for or how to formulate a query to find it.

After an initial query, SciNet displays a range of keywords and topics on a topic radar. Using the directions on the radar, the engine shows how these topics are related to each other.

The relevance of each keyword is displayed as its distance from the centre point of the radar: more relevant keywords appear nearer to the centre, and less relevant ones farther away.

The search engine also offers alternatives that are connected with the topic, but which the user might not have thought of querying. By moving words around the topic radar, users specify what information is most useful for them.
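
As a rough illustration of that layout idea, here is a minimal TypeScript sketch (not SciNet's actual code) that places each keyword at a distance from the radar's centre proportional to how irrelevant it is, with an angle used simply to spread terms around the circle. The keywords and relevance scores are invented for the example.

```typescript
// Toy "topic radar" layout (illustrative only, not SciNet's implementation):
// more relevant keywords are placed closer to the centre of the radar.
interface Keyword {
  term: string;
  relevance: number; // 0 (irrelevant) .. 1 (highly relevant)
}

function radarPositions(keywords: Keyword[], radius = 100): { term: string; x: number; y: number }[] {
  return keywords.map((kw, i) => {
    const angle = (2 * Math.PI * i) / keywords.length; // spread terms evenly around the circle
    const distance = (1 - kw.relevance) * radius;      // high relevance => near the centre
    return { term: kw.term, x: distance * Math.cos(angle), y: distance * Math.sin(angle) };
  });
}

// Invented example scores, just to show the mapping.
console.log(radarPositions([
  { term: "information retrieval", relevance: 0.9 },
  { term: "user interfaces", relevance: 0.6 },
  { term: "machine learning", relevance: 0.3 },
]));
```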

When people are uncertain about a topic, they are typically reluctant to reformulate the original query, even if they need to in order to find the right information, researchers said.

With the help of a keyword cloud, people can more quickly infer which of the search options they receive is more significant for them because they do not need to visit the pages offered by the search engine to find new search words and start again.

It’s easier for people to recognise what information they want from the options offered by the SciNet search engine than it is to type it themselves, according to the project’s coordinator, Tuukka Ruotsalo.

Researchers have founded a company, Etsimo Ltd, to commercialise the search engine.

A Comprehensive List of Search Engines

When people think of search engines, the first name that comes to mind is often Google. It’s one of the most enduring brand names and has even worked its way into mainstream vernacular: today many people say they “Googled” something rather than “searched online” for it. According to comScore, Inc., Google and its affiliated websites account for 67.6% of the search engine market share in the United States and, according to Netmarketshare, 66.44% worldwide.

Though prominent, Google is not the only search engine available. There are innumerable others that provide various interfaces, search algorithms, and other unique features. Many even base their search algorithms around specific philosophies, ones that often attract brand-new audiences.

In descending order, the remaining most popular search engine companies in the United States, by market share after Google, are Microsoft (18.7%), Yahoo (10.0%), Ask Network (2.4%), and AOL (1.3%), according to the same comScore report.

Likewise, according to December 2014 data, the remaining most popular search engines worldwide by market share are Baidu (11.15%), Bing (10.29%), Yahoo! (9.31%), and AOL (0.53%).

The exact data is highly variable based on who’s reporting it, and it varies even further on a month-to-month basis. But generally speaking, the ranking order does not vary much.

This list does not necessarily include the 12 most used or well-known search engines after Google; instead, it includes search engines that differ from one another in terms of history, philosophy, content, targeted audiences, and other variables. With that in mind, let’s take a look at 12 of the most underrated search engines.

Bing

Based on comScore’s data, the next most powerful player in the search engine industry is Microsoft and its search engine, Bing.

Key differences between the two engines, according to the New York Times, lie in backdrop, search tools, and the amount of information offered on the immediate search page. Bing sports striking, engaging home pages, a display tool when searching for airline flights, aggregate restaurant rating badges, and more. One popular feature is its “linkfromdomain:” search term. This term allows users to see the most frequently used outgoing link from a given site. This can provide easy access to research pages or recommended sites from a trusted source.

Another operator, filetype: (for example, filetype:pdf), restricts results to a given file type, while the related contains: operator finds pages that link to files of that type. Researchers and students working with specific software can search specifically for PDFs, Word documents, Excel spreadsheets, different photo types, and other common file types on a whim. This helps to rule out unnecessary documents.

bing filetype operator
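
For readers who want to try these operators, the following minimal TypeScript sketch simply builds ordinary Bing query URLs that include them; the example queries and the example.org domain are illustrative, and bing.com/search?q= is just Bing's public search URL, not a dedicated API.

```typescript
// Minimal sketch: construct Bing search URLs that use the advanced operators
// described above. The operator strings are the only Bing-specific part;
// everything else is ordinary URL construction.
function bingQueryUrl(query: string): string {
  return `https://www.bing.com/search?q=${encodeURIComponent(query)}`;
}

// Restrict results to PDF documents about a topic (illustrative query).
console.log(bingQueryUrl("renewable energy report filetype:pdf"));

// See pages that a trusted site links out to (illustrative domain).
console.log(bingQueryUrl("linkfromdomain:example.org research"));
```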

Bing’s clean interface particularly excels when searching for videos. On Google, video results don’t integrate well with text results; on Bing, the listed videos fit neatly side-by-side in an interface that best accommodates them. This helps to cut down on the amount of time a user would spend scrolling.

Bing hasn’t been shy in comparing itself to Google, either. It has even launched a website titled “Bing It On” which directly compares its search results to those of Google.

Yahoo

Another powerful competitor in the search engine market is the long-enduring Yahoo. For many, Yahoo is much more than a search engine; it’s an online Swiss Army knife.

In addition to its search engine, the Yahoo Web portal offers easy access to its news aggregator, games center, retail options, travel guide, horoscope, and other varied features. Yahoo Finance is a popular aggregator of some of the best financial news available, combining information from CNN Money, The Street, and more.

Another extraordinarily well-used feature of Yahoo is Yahoo Answers, a forum that lets people phrase questions in ways traditional search engines have difficulty handling. Other users can view the questions and draw on their own background knowledge to give personalized answers.

Other popular aspects of Yahoo include easy photo sharing (facilitated by Yahoo’s purchase of Flickr), local news through Yahoo Local, and myriad entertainment options. By having all these convenient features in one place, users rarely have to venture elsewhere if they don’t want to.

Yandex

Founded in Russia in 1997, Yandex has quickly risen to become the country’s premier search engine. Since 2010, it has gone worldwide and become a popular resource for those looking for easy-to-use search across different languages. Its translation and cross-lingual search options are featured prominently on its homepage, and it accommodates English, Russian, German, French, and several smaller Eastern European languages. This allows bilingual searchers or students working on language projects to more easily find whatever it is they’re looking for.

yandex search engine

Ask

The search engine formerly known as “Ask Jeeves” was easily one of Google’s greatest competitors during the early days of the World Wide Web. Though not the hot commodity it once was, it remains popular for its accommodation of natural, colloquial language. After a user poses a question, it provides possible answers and a large list of other pertinent questions.

Ask’s historic accommodation of vernacular has, in essence, found a spiritual successor through voice commands and searches on mobile devices. Thanks to Apple’s Siri (which relies on Bing) and the Google app, there’s less stigma over voice commands, and they’re becoming more popular. With Siri, users are directly able to bypass using their other apps or search engines by just asking their phone a question.

Though Ask may have popularized the use of dialectal searches, it unfortunately is not as well-integrated with the programs that now champion them.

Dogpile

For those unsure of which search engine to use, many default to Dogpile — the engine that aggregates from pretty much everyone else.

Like Ask, Dogpile is another site with early online history and considerable brand loyalty. Search results (from Google, Yahoo, Yandex, and more) are set upon a focused interface of white and varying shades of blue. Many prefer Dogpile for its chic design, comprehensive answers, and a template that doesn’t prove too distracting or cluttered.

dogpile search engine

Its listed features include: Category Links, Yellow Pages, White Pages, Statistics Bar, Search Finder, Preferences, Spelling Correction, About Results, and Favorite Fetches. A user’s Dogpile experience is easily personalized to their liking.

Yippy

Many Internet users are unfamiliar with the Deep Web. According to CNN, the Deep Web encompasses everything traditional search engines have trouble finding. Pages in the Deep Web may be relatively unconnected to other parts of the Internet or housed on private networks.

yippy search engine

Search engine Yippy (formerly Clusty) searches the Web using other search engines, but it provides results in the form of “clouds” instead of traditional search methods. This makes it more likely to find pages that would be otherwise buried or nearly impossible to find using search engines like Google or Yahoo. Though Yippy doesn’t have the ability to scour every corner of the Deep Web (no search engine does), it is much more capable and efficient at finding pages for users with more obscure and niche tastes.

Duck Duck Go

With a name based on the popular kids’ game Duck Duck Goose, Duck Duck Go is a website that many find as approachable, user-friendly, and engaging as the game.

Duck Duck Go’s first priority is protecting user privacy. Many adults of all ages find themselves concerned over identity theft and hacking; these issues regularly appear on both local and national news. This search engine doesn’t reach into your history, email, or social media workings to drum up relevant information. Two totally different people can search the same term and get identical results.

The search engine also maintains a handy infinite scroll option (no need to click to other pages), reduced advertising spam, and prompts to help clarify a question.

EntireWeb

First launched back in 2000, EntireWeb is a search engine that asks website owners to submit their sites to its index, free of charge. This results in a much less crowded search space and means those who submit are less likely to be drowned out by the competition. Queries can be submitted for regular Web search, image search, or real-time search.

Blekko

Created just a few years ago in 2010, blekko (with a stylized lowercase “b”) is the search engine clearly inspired by Twitter. While Twitter (and now other social media sites) has “hashtags,” blekko has “slashtags.” When searching something in its database, blekko provides users with a series of related key words with which to narrow their search.

For instance, searching “celebrity news” on blekko turns up the slashtags for Top Results, Gossip, Magazine, and Latest. Blekko’s interface, which combines minimalist squares and a varied color palette, is considered very user-friendly.

blekko search engine results page example

Goodsearch

Recent years have seen an uptick in people’s interest in engaging with technology in an ethical manner. As corporations such as Google and Microsoft grow steadily more powerful, people have been scrutinizing more closely where their money and attention go.

Goodsearch is a search engine for the charitable. Fueled by Yahoo, Goodsearch allows users to pick a cause of their choice; this can be a nonprofit organization or school. Upon selecting their target, Goodsearch will begin donating 50% of its revenue from that user to their cause. To date, Goodsearch has donated well over $11 million to a variety of sources. According to Goodsearch, the American Society for the Prevention of Cruelty to Animals (ASPCA) has received more than $50,000, and St. Jude Children’s Research Hospital has received more than $18,000 from the website.

goodsearch search engine donation example

In recent years, Goodsearch has earned the attention of many celebrities, including Zooey Deschanel, Jessica Biel, and Montel Williams.

GigaBlast

Another search engine boasting enormous social and trust capital is GigaBlast. Founded in 2000, GigaBlast is, according to its LinkedIn page, the “leading clean-energy search engine.” An impressive 90% of its energy usage comes from harnessed wind energy, and the company maintains fewer than 10 employees.

Though it’s physically small, its power is big. GigaBlast indexes well over 10 billion pages of content. As environmental issues become more prominent in public consciousness, people are more likely to turn to sites like GigaBlast.

Baidu

Though a relative unknown in the United States, Chinese search engine Baidu is a juggernaut on the international scene. It’s the top search engine in China (with 62% of search engine market share in 2013), and it is the second most popular search engine in the world.

“China’s Google,” as it is nicknamed, has been steadily growing since its incorporation in 2000, and it has recently begun courting English-speaking developers. Its features include searchable webpages, audio files, and images, a collaborative encyclopedia, and a bustling discussion forum. Thanks to its savvy smartphone integration, it has leapt past its immediate competitor, Qihoo 360, which now has only 21% of the Chinese search engine market share.

If Baidu manages to continue its domestic success abroad, it might not be long before it does become a household name in the United States.

In Conclusion

Once-popular search engines like AOL.com and InfoSeek have either died out or are now sock-puppeted by their former competitors. InfoSeek attempted to charge for searches, failed, adjusted by depending on gaudy banner advertisements, became a generic “portal,” and was finally absorbed into Disney’s Go.com. As AOL declined after its merger with Time Warner, so did its search engine, which now serves results powered by Google.

Search engines in the preceding list still thrive because they capitalize upon some distinct corner of the market. For some, that market involves corporate social responsibility (Goodsearch, GigaBlast), social trends (Blekko), privacy concerns (Duck Duck Go), or utility (Yippy, Dogpile). Giants like Google, Bing, and Yahoo largely dominate the general market, so the others have had to specialize to survive.

A SEARCH ENGINE COULD BECOME THE FIRST TRUE ARTIFICIAL INTELLIGENCE

Everything in our online life is indexed. Every idle tweet, status update, or curious search query feeds the Google database. The tech giant recently bought a leading artificial-intelligence research outlet, and it already has a robotics company on its books.

So what if Google, or Facebook, or any of the companies we entrust our information to, wanted to use our search histories to create an artificially intelligent robot?

Writer and director Alex Garland’s new film, Ex Machina, looks at just that. Garland has said in other interviews that he doesn’t want his film to be taken as a cautionary tale on the future of AI, which many scientists are worried about. But he told Quartz that there is something we should be worried about: the rise of large, unchecked organizations.

In Ex Machina there is a fictional search company called Blue Book, founded by a hirsute and reclusive genius, Nathan (played by Oscar Isaac). In his compound alone in an unspecified forest, Nathan has built the world’s first artificially intelligent robot, AVA (played by Alicia Vikander). He has invited one of Blue Book’s employees, Caleb (Domhnall Gleeson) to see if it can pass the Turing test, which essentially determines whether a computer can trick a human into believing she is having a conversation with another human.

Nathan uses Blue Book’s search-engine database to create the backbone for AVA’s brain. Every search query builds up a thought pattern that mirrors our own, Garland said: “the way our brains jump around and have non-sequiturs that aren’t really non-sequiturs.”

But it’s not just about the tech companies, the information we willingly give them, or even what they’re doing with them.

“I have a kind of genuine ambivalence towards the tech companies,” Garland said, “I see them in many ways as being similar to NASA in the 1960s, pushing our potential forward: They’re the guys going to the moon.”

But, Garland said, humans or companies without oversight tend to abuse the power they have.

“It’s not just about data collection per se,” he said, “it’s just about power and accountability, and checks and balances.”

Given their financial and technical resources, as well as the sheer amount of data they have on us, we may well see the first AI robot (whatever that means, since intelligence is a fuzzy term in this context) come from a company like Google. While such companies may or may not be doing anything nefarious with our data, Garland said, we are willingly handing it over to them, without knowing what their intentions are.

“That I definitely find scary,” Garland said.

And what will future artificial intelligence actually entail? This is one of the main issues Garland’s film contends with. One of the shortcomings of the Turing test is its inability to prove whether a computer is effectively imitating human intelligence, or is truly intelligent itself.

“We could recognize in the AI that it might be able to think in some respects better than us, but its experience of the world would be defined by what it thinks and what it encounters, and it would just be different to us.”

As to whether to consider such a machine sentient, Garland said, “sentience feels like a function of curiosity.” But he is cryptic about whether AVA is meant to be as intelligent as the humans in the film: “She does a very good job of seeing human life, but that doesn’t mean she is human life.”

Goodbye Blekko: Search Engine Joins IBM’s Watson Team

After nearly two years of silence, Blekko’s home page now says its team and technology are part of IBM’s Watson technology.

Add Blekko to the list of startup search engines that have come and gone.

A message on the Blekko home page says that “The blekko technology and team have joined IBM Watson!” The page redirects to a post on IBM’s Smarter Planet blog, where things get a bit confusing. Blekko’s home page message gives the impression of a complete acquisition, but IBM’s post mentions the acquisition of “certain technology.”

In our work to enhance the performance of cognitive computing systems, we’re constantly exploring new ways to identify, understand and make use of information from both public and private sources. Toward this end, we are excited about the acquisition of certain technology from Blekko, Inc, which closed this afternoon. This will provide access to additional content that can be infused in Watson-based products and services delivered by IBM and its partners.

We’ve reached out to Blekko CEO Rich Skrenta (who tweeted the news) for clarification on what IBM is acquiring, and we’ll update this if we learn more.

Blekko came out of stealth in 2008 with Skrenta promising to create a search engine with “algorithmic editorial differentiation” compared to Google. Its public search engine finally opened in 2010, launching with what the site called “slashtags” — a personalization and filtering tool that gave users control over the sites they saw in Blekko’s search results.

In 2011, Blekko went on the offensive against Google over spam, launching a “spam clock” website at spamclock.com that counted up the one million spammy web pages that Blekko claimed were being published online every hour. This was just as the debate on content farms and Google was really heating up, and in early 2011 Blekko even announced that it was banning content farms from its index. About three weeks later, Google announced the Panda algorithm update, its own effort to combat spam in search results — by no means a response to Blekko’s announcement, but certainly indirect validation that Blekko, and others who had been complaining about the amount of spam in Google’s search index, were on to something.

Blekko has remained out of the news for almost two years, though, with some of its last mentions being a search app for tablets and a funding round announced alongside layoffs.

Sleuthing Search Engine: Even Better Than Google? Memex, Developed by the U.S. Military, Is Helping to Track Down Online Criminals

In the run-up to Super Bowl XLIX, a team of social workers in Glendale, Ariz., spent two weeks combing through local classified ads sites. They were looking for listings posted by sex traffickers.

Criminal networks that exploit women often advertise on local sites around events that draw large numbers of transient visitors. “It’s like a flood,” said Dominique Roe-Sepowitz, who headed the Glendale effort.

Dr. Roe-Sepowitz is director of the Office of Sex Trafficking Intervention Research at Arizona State University. She has worked for five years with authorities in Houston, Las Vegas and Phoenix to find and hunt down traffickers.

In the past, she painstakingly copied and pasted suspicious URLs into a document and looked for patterns that suggested a trafficking ring. This year, she analyzed criminal networks using visual displays from a powerful data-mining tool, one whose capabilities hint at the future of investigations into online criminal networks.

The program, a tool called Memex developed by the U.S. military’s research and development arm, is a search engine on steroids. Rather than endless pages of Web links, it returns sophisticated infographics that represent the relationships between Web pages, including many that a Google search would miss.

For instance, searching the name and phone number that appear in a suspicious ad would result in a diagram that showed separate constellations of dots, representing links to ads that contain the name, the phone number, or both. Such results could suggest a ring in which the same phone number was associated with different women. Clicking on a dot can reveal the physical location of the device that posted the ad and the time it was posted. Another click, and it shows a map of the locations from which the ads were posted. Capabilities like this make it possible to identify criminal networks and understand their operations in powerful new ways.
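
The underlying idea of linking ads that share contact details is simple to sketch. The TypeScript below is not Memex's actual code, only a toy illustration of how ads sharing a name or phone number could be connected into the kind of "constellations" the article describes; the ad records are invented.

```typescript
// Toy illustration (not Memex): link ads that share a contact name or phone
// number, so clusters of connected ads stand out.
interface Ad {
  id: string;
  name: string;
  phone: string;
}

// Build an adjacency list: two ads are linked if they share a name or a phone number.
function linkAds(ads: Ad[]): Map<string, string[]> {
  const graph = new Map<string, string[]>(ads.map((a): [string, string[]] => [a.id, []]));
  for (let i = 0; i < ads.length; i++) {
    for (let j = i + 1; j < ads.length; j++) {
      const a = ads[i], b = ads[j];
      if (a.name === b.name || a.phone === b.phone) {
        graph.get(a.id)!.push(b.id);
        graph.get(b.id)!.push(a.id);
      }
    }
  }
  return graph;
}

// Hypothetical data: two ads sharing a phone number hint at a single operation.
const ads: Ad[] = [
  { id: "ad1", name: "Anna", phone: "555-0100" },
  { id: "ad2", name: "Maria", phone: "555-0100" },
  { id: "ad3", name: "Lena", phone: "555-0199" },
];
console.log(linkAds(ads));
```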

Unlike a Google search, Memex can search not only for text but also for images and latitude/longitude coordinates encoded in photos. It can decipher numbers that are part of an image, including handwritten numbers in a photo, a technique traffickers often use to mask their contact information. It also recognizes photo backgrounds independently of their subjects, so it can identify pictures of different women that share the same backdrop, such as a hotel room—a telltale sign of sex trafficking, experts say.

Also unlike Google, it can look into, and spot relationships among, not only run-of-the-mill Web pages but online databases such as those offered by government agencies and within online forums (the so-called deep Web) and networks like Tor, whose server addresses are obscured (the so-called dark Web).

Since its release a year ago, Memex has had notable successes in sex-trafficking investigations. New York County District Attorney Cyrus Vance said Memex has generated leads in 20 investigations and has been used in eight trials prosecuted by the county’s sex-trafficking division. In one case last June, Mr. Vance said, Memex’s ability to search the posting times of ads that had been taken down helped secure a sentence of 50 years to life in prison for a trafficker.

The creator of Memex is Christopher White, a Harvard-trained electrical engineer who runs big-data projects for the Defense Advanced Research Projects Agency, or Darpa. The Defense Department’s center of forward-looking research and development, Darpa put between $10 million and $20 million into building Memex. (The precise amount isn’t disclosed.) Although the tool can be used in any Web-based investigation, Dr. White started with the sex trade because the Defense Department believed its proceeds finance other illegal activities.

Memex is part of a wave of software tools that visualize and organize the rising tide of online information. Unlike many other tools, though, it is free of charge for those who want to download, distribute and modify. Dr. White said he wanted Memex to be free “because taxpayers are paying for it.” Federal agencies have more money to spend, but local law-enforcement agencies often can’t afford the most sophisticated tools, even as more criminal activity moves online.

Among tools used by law-enforcement agencies, Memex would compete with software from Giant Oak, Decision Lens and Centrifuge Systems. The leader in the field is Palantir Technologies, whose software costs $10 million to $100 million per installation and draws from the user’s proprietary databases rather than from the Web. Palantir didn’t immediately reply to a request for comment.

Advertisements posted by sex traffickers amount to between $90,000 and $500,000 daily in total revenue to a variety of outlets, according to Darpa.

Dr. White recently hired several economists to perform a large-scale study of the sex market and its finances, using Memex data along with other industry research.
Memex and similar tools raise serious questions about privacy. Marc Rotenberg, president and executive director of the Electronic Privacy Information Center in Washington, D.C., said that when law-enforcement authorities start using powerful data-mining software, “the question that moves in the background is how much of this is actually lawful.” Data-visualization tools like Memex enable enforcers to combine vast amounts of public and private information, but the implications haven’t been fully examined, he said.

Dr. White said he drew a “bright line” around online privacy, designing Memex to index only publicly available information. In anonymous networks like Tor, which hosts many sex ads, Memex finds only the public pages. But since the tool isn’t technically controlled by Darpa, independent developers could add capabilities that would make it more invasive, he acknowledged.

Another big question is whether sex traffickers and other malefactors will thwart Memex by changing their tactics. For example, they might blur out photo backgrounds if they knew law enforcement officials were searching for them. For this reason, law-enforcement users will withhold some of the proprietary data they developed while using Memex. “We want it to be free,” said Dr. White. “But there’s always this tension between knowing what people are doing…and alerting them to that fact so they change their behavior.”

Dr. White is starting to test other uses for Memex with law enforcement and government partners, he said, including recognizing connections between shell companies, following the chains of recruitment for foreign fighters drawn to the terrorist group ISIS, mapping the spread of epidemics, and following ads for labor and goods to understand supply chains involved in money laundering.

Bing Testing Recipe Data In Search Snippets

Bing is now testing recipe answers that show the recipe overview, ingredients, steps and related recipes directly in the search result snippets.

Jennifer Slegg spotted this test on Bing this morning and shared a few screen shots of it in action. We cannot replicate this on our end yet.

These special snippets are likely generated from the recipe schema markup deployed on the web sites in question.
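
For context, recipe schema markup is the schema.org Recipe vocabulary that publishers embed in their pages, typically as JSON-LD inside a script tag. The sketch below shows the general shape of such markup as a TypeScript object literal; the recipe values are invented for illustration and are not taken from the pages Bing was testing.

```typescript
// Illustrative schema.org/Recipe payload (values are made up).
// On a real page this object would be serialized into a
// <script type="application/ld+json"> tag so crawlers can read it.
const recipeMarkup = {
  "@context": "https://schema.org",
  "@type": "Recipe",
  name: "Classic Mai Tai",
  recipeIngredient: [
    "2 oz aged rum",
    "0.75 oz lime juice",
    "0.5 oz orange curacao",
    "0.5 oz orgeat syrup",
  ],
  recipeInstructions: [
    { "@type": "HowToStep", text: "Shake all ingredients with ice." },
    { "@type": "HowToStep", text: "Strain over crushed ice and garnish." },
  ],
  totalTime: "PT5M",
};

console.log(JSON.stringify(recipeMarkup, null, 2));
```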

Bing is probably testing to see whether searchers like having the recipe details right in the search results or whether they prefer to click through to the recipe site itself.

Google has also done a lot around recipes in the search results in the past and has also tested showing schema data directly in the snippets.

Quertle introduces new search engine for biomedical, life science and healthcare professionals

Linguistic engine outperforms current search solutions to deliver contextually relevant results for biomedical literature and patents, saving time and money

Biomedical and health IT solution developer Quertle LLC today released the Quetzal® Search and Communication (www.Quetzal-Search.info) biomedical search engine. Built on Quertle’s Quantum Logic Linguistic™ technology, Quetzal is specifically optimized for biomedical, life science and healthcare professionals. The new tool drastically improves search processes by quickly delivering contextually relevant results, eliminating the frustration and wasted time searchers experience with current solutions.

A National Library of Medicine award winner used in 191 countries, Quetzal’s linguistic technology not only focuses on relevant results, but also simplifies the process – using optimized ontologies to eliminate reliance on complicated Boolean searches, and using separate author, journal and affiliation entries to avoid confusing results. Quetzal’s Power Term™ functionality allows users to find all members of a category such as “diseases” without cluttering results with hits from general terms (e.g., disease, syndrome), resulting in lists that specifically answer questions such as: “Which diseases are affected by caffeine?”

“Researchers need an easy way to find the important content they want with assurance that they haven’t missed critical documents. With traditional search engines, users spend 95% of their time searching and only 5% reviewing the relevant material,” said Jeffrey D. Saffer, Ph.D., president of Quertle. “Our patent-pending technology reverses those percentages with a unique combination of linguistic and statistical methods to quickly uncover relevant results and minimize risk of missed materials.”

Quetzal Search and Communication includes unique features such as automated key concept extraction, embedded private journal clubs, useful filtering options, and instant searches for entire classes of entities – providing quick access to the information that matters most to users. Quetzal content includes PubMed, PubMed Central full text, patent grants and applications, AHRQ Treatment Protocols, NIH grants, TOXLINE and relevant news sources.

Benefits of Quetzal include:

  • Presentation of author statements pertinent to user query with terms highlighted in context, making it easier to see why results are relevant
  • Single-click access to full abstracts without leaving the results page
  • Easy-to-use powerful filters – including Quetzal’s proprietary Key Concepts filter – automatically identifying important concepts, creating time-saving means to home in on points of significant interest
  • Direct access to over 10 million free PDFs plus easy access through users’ library subscriptions, significantly improving productivity
  • Built-in note-taking feature simplifying user notation of crucial points
  • Private, secure Journal Club discussions providing group interactions

Quertle offers three versions of the Quetzal solution:

  • Basic (Free) – Enhanced linguistic searching of PubMed documents to find relevant results; ideal for undergraduate students and occasional searchers
  • Professional – Includes additional content, sources and filters, Journal Club and more; essential resource for physicians, researchers, other life science and healthcare professionals
  • Advanced – Most powerful version, includes patents and full-text searching for when missing key information would be costly; also appropriate for information professionals

Search engine self-diagnosis and ‘cyberchondria’

QUT research is aiming to improve search engines after finding online self-diagnosis of health conditions provides misleading results that can do more harm than good.

Dr Guido Zuccon, from QUT’s Information Systems School, found major search engines were providing irrelevant information that could lead to incorrect self-diagnosis, self-treatment and ultimately possible harm.

Dr Zuccon and colleagues from CSIRO in Brisbane and Vienna University of Technology, Austria, assessed the effectiveness of results from Google and Bing in response to medically-focused searches.

The rush to define ailments online makes up a significant chunk of internet searches, with Google reporting that one in 20 of its 100 billion searches a month is for health-related information. Previous research found 35 per cent of US adults had gone online to self-diagnose a medical condition.

“People commonly turn to ‘Dr Google’ to self-diagnose illnesses or ailments,” Dr Zuccon said.

“But our results revealed only about three of the first 10 results were highly useful for self-diagnosis and only half of the top 10 were somewhat relevant to the self-diagnosis of the medical condition.”

The researchers showed participants medically accurate images of common conditions like alopecia, jaundice and psoriasis and asked what they would search for in an attempt to diagnose each one.

For jaundice, for example, participants searched for queries including “yellow eyes”, “eye illness” and “white part of the eye turned green”.

“Because on average only three of the first 10 results were highly useful, people either keep searching or they get the wrong advice which can be potentially harmful for someone’s health,” Dr Zuccon said.

He warned it was also possible those seeking to self-diagnose online would experience “cyberchondria” – where subsequent searches could escalate concerns.

“If you don’t get a clear diagnosis after one search you would likely be tempted to keep searching,” Dr Zuccon said.

“So if you had searched for the symptoms of something like a bad head cold, you could end up thinking you had something far more serious, like an issue with the brain.

“This is partly down to searcher bias and partly down to the way the search engines work. For example, pages about brain cancer are more popular than pages about the flu so the user is driven to these results.”

Dr Zuccon said search engines performed effectively if the name of the illness was already known.

“They are great for providing a wealth of information about illnesses and diseases, so if you search for something like jaundice you’ll have a lot of useful results,” he said.

“But our findings suggest it is not the best option for trying to find out what’s wrong with you.”

Dr Zuccon said further research was needed to identify how to improve search engines to provide searchers with the most effective results.

“We are currently developing methods for search engines to better promote the most useful pages,” he said.

“For example, along with colleagues at the CSIRO, we have developed algorithms that return pages that consumers find easier to understand, while maintaining the relevancy and correctness of the medical information presented.”

Baidu Dives into Indonesia After Pulling Out of Japan

At the end of March, Baidu, the dominant Chinese search engine, quietly pulled out of Japan. The exit from the Japanese market was so quiet that it took a full month for anyone to even notice that the search engine part of the website had been shuttered. In an interview with TechinAsia, Baidu admitted that they hadn’t even updated their index since 2013.

Editor’s Note: Baidu participated with the author to provide info for this article.

Background on Baidu in Japan

Baidu entered Japan in 2007 as their first attempt at expanding globally outside of China. Initially, there were high hopes that they could be successful in Japan, as Japanese Internet users already use two different search engines, and Baidu wished to become that second search engine. Baidu CEO Robin Li said that while they hoped to replicate the success they had in China, they would be “very patient.” Clearly they have been, because it took them eight years to quit, even though their market share hardly budged the entire time they were in Japan.

Out with the Old in with the New

While Baidu conceded defeat in Japan, they are significantly increasing their investments in other places across the globe. Last year Baidu entered Brazil, and they are currently operating beta versions of their search in Thailand and Egypt. However, Baidu is making their largest bet in Indonesia, where they aren’t even running a local language search engine. Indonesia is a country many are predicting will be the next India or China as technology accelerates growth in a country where many people live on $1 per day. Indonesia is the fifth most populous country in the world with 250 million people, and it is the 16th largest economy in the world. In fact, McKinsey predicts that by 2030, the Indonesian economy will overtake both the UK and Germany to become the 7th largest economy.

Baidu in Indonesia

The potential of the market makes logical sense for a big bet by any technology company, but Baidu’s approach is truly fascinating. Baidu entered the Indonesian market with a mobile first strategy and is importing popular apps from China like the Baidu Browser alongside homegrown local tools.

They opened up their first office in Jakarta, Indonesia in September 2013 and by the end of that year they already had 3 million users in Indonesia using their optimization and security software: PC Faster. Currently, the most successful product launched is the DU Battery Saver, a battery optimization app for Android phones, which has been downloaded 14 million times.

Baidu VS Google

A few weeks ago, I had the opportunity to attend Echelon Indonesia in Jakarta, a conference hosted by the popular Asian tech blog, E27.co. The conference was a fantastic display of what’s happening in technology in this fast-growing economy. While in the US it is pretty standard to see Google as a marquee sponsor of many tech events and conferences, at this event Baidu was the largest sponsor, with Google nowhere to be found. Baidu was at the conference promoting Mobomarket, an app store for Android, which currently hosts more than 600,000 local apps.

Baidu booth at Echelon. Photo courtesy of e27

Since Baidu has not created a search engine in the Indonesian market, they are not competing directly with Google, yet. Baidu is playing a very long game, as evidenced by their focus on mobile. In a country where just 24% of people have ever accessed the Internet, becoming the dominant piece of technology on the only computer many people will own, a smartphone, will allow Baidu to effectively hold the line on Google’s potential to grow in Indonesia.

I asked Baidu why out of all countries, Indonesia is so important to them, and Bao Jianlei, Managing Director of Baidu Indonesia, told me the following:

As we can see Indonesia is a growing market and the country has a lot of creative human resources in the digital field. That is why many digital companies from other countries come to Indonesia. Baidu wants to have a long investment in Indonesia and wants to develop the digital ecosystem here together with other players in the industry. We are optimistic we can grow in Indonesia. To achieve the target, we are focused on our localized operations. We also prioritize local features and contents to meet the needs of the Internet users in Indonesia.

To that end, Baidu is heavily integrating into local culture to become almost a native digital company. They are partnering with universities, promoting local apps at startup events, and becoming the source of research about Indonesian users and their smartphones.

What Do You Need to Know

Only time will tell if Baidu’s strategies in Indonesia pay off, but in the meantime, if you have any considerations about offering your products or services to the Indonesian market, here are the three things you need to know:

  1. Baidu’s investment in mobile only in Indonesia shows how important mobile is and will be in this market.
  2. Baidu is investing only in Android, even though iPhones are an aspirational product in Indonesia. Entering this market with a mobile strategy means you can probably ignore iOS.
  3. As more Indonesians begin using the Internet, Baidu is poised to become the dominant local player. Don’t concern yourself with Google, just follow Baidu’s playbook and updates.

This NYC Startup is The Only Offline Data Search Engine

peer to peer search engine

Findyr is a global data marketplace and it’s all about people. When you need specific data, you source locals, and they collect the data you need. It’s your go-to when you need accurate data, anywhere in the world.

It’s this simple: request a survey, data point, photo or video at any location, and offer the amount that you’d like to pay. Findyr connects your request with users in the surrounding area. Done, and done with accurate information from locals. It’s a sort of offline Google search – and Fortune 500 companies are using it to get the localized results they’re looking for.
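
To make that flow concrete, here is a minimal TypeScript sketch of what a request record and a naive nearby-collector check might look like in a marketplace of this kind. The field names, the distance check and the sample values are assumptions for illustration only; they are not Findyr's actual API.

```typescript
// Hypothetical request record for a Findyr-style marketplace (illustrative only).
interface DataRequest {
  kind: "survey" | "data_point" | "photo" | "video";
  description: string;
  location: { lat: number; lon: number };
  offerUsd: number; // amount the requester is willing to pay
}

// Toy matcher: a local collector is eligible if they are within `radiusKm`
// of the requested location (crude equirectangular distance, fine for a sketch).
function isNearby(request: DataRequest, collector: { lat: number; lon: number }, radiusKm = 25): boolean {
  const kmPerDegLat = 111;
  const dLat = (request.location.lat - collector.lat) * kmPerDegLat;
  const dLon = (request.location.lon - collector.lon) * kmPerDegLat *
    Math.cos((request.location.lat * Math.PI) / 180);
  return Math.hypot(dLat, dLon) <= radiusKm;
}

// Invented example: a photo request placed in Jakarta.
const request: DataRequest = {
  kind: "photo",
  description: "Current shelf price of rice at a local market",
  location: { lat: -6.2, lon: 106.8 },
  offerUsd: 5,
};
console.log(isNearby(request, { lat: -6.21, lon: 106.85 })); // true
```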

Founder and CEO Anthony Vinci tells us more about his literally locally-powered search engine.

Tell us about the service.

Findyr is a global information marketplace. On the demand side, users make requests for on-the-ground, local data, surveys, photos and videos. On the supply side, users around the world receive those requests on the Findyr app and provide the information.

In practice, Findyr acts like an offline search engine: you can request information at a location anywhere in the world.

Companies and individual researchers use Findyr for media acquisition, tracking current events, market research, mystery shopping, sentiment surveys, due diligence, inflation indexes and macro & micro-economic data collection.

How is it different?

Findyr combines the peer-to-peer labor economy with the data economy. In doing so, it creates an entirely new form of information discovery. Rather than rely on costly bespoke research or large subscription datasets, Findyr enables users to get custom information from anywhere. While Google catalogs everything online, Findyr enables people to obtain new information that is offline.

What market are you attacking and how big is it?

We are focusing on companies/professionals in marketing, economic and financial information services. Findyr serves as a new tool that sits alongside an analyst, consultant or trader’s normal Bloomberg terminal, Comscore subscription, Google Search or Wikipedia. The market for such information is north of $30bn.

What is the business model?

Findyr is a marketplace. We take a margin on transactions that go through. Our goal is to grow the number of transactions through the system and thereby increase our revenue over time.

What inspired the business?

I began my career in the tech industry here in Silicon Alley at Linkshare.com, an affiliate marketing company, during the first dot.com boom. I went on to retool and did a PhD in International Relations at the London School of Economics where I did fieldwork throughout Africa. During that time I spent a lot of time in places like Sudan and Liberia doing on the ground surveys. When I returned to the US I realized that there was a way to technologically enable the collection of on the ground data and information and, by using a marketplace approach, actually directly benefit those on the ground doing the collection and link them to the global economy.

What types of data are people most often seeking according to your platform?

A lot of transactions on the marketplace involve market research and due diligence. For example, looking at local prices or consumer sentiment. We also provide the basis for local inflation indexes and food price indexes. More and more, we are receiving requests from media and news companies looking for local photos and videos that show what is really going on.

What are the milestones that you plan to achieve within six months?

We are growing our marketplace every day. The goal is to make Findyr an indispensable tool for researchers, analysts, consultants and media makers. In six months we are looking to increase marketplace transactions, in particular through attracting thousands of individual requesters and hundreds of companies to the site.

What is the one piece of startup advice that you never got?

Persistence is the number one feature of success in this industry.

By the way, a very useful piece of advice that I got from one of my investors – Adam Riggs former CFO and President of Shutterstock.com – is to send user stats to the entire company every hour, every day, so that everyone knows what you’re fighting for and where you’re at in that fight.

If you could be put in touch with one investor in the New York community who would it be and why?

I think that Fred Wilson at Union Square Ventures would be interesting to talk to and I think he might find our business model interesting.

Why did you launch in New York?

New York City is the perfect balance of access to tech talent and to customers. I also like the Silicon Alley assumption that you should actually have a cash-generating business model from day one.

Where is your favorite outdoor bar in the city for a drink when it is actually warm out?

Hands down my favorite place to drink in NY is at the Explorers Club, where I’m a member, and when it’s summertime we do it on the terrace. Closer to the office, Coffee Shop in Union Square is always a good one.

Startup’s Search Engine Matches Experts to Projects, Globally

Founded in 2011, French startup ideXlab has developed a very specialized search engine, one that sifts through patents, academic papers, specialized industrial databases and even social networks to establish expert profiles and match them to your projects.

OpeniSme logo from Twitter.

OpeniSme logo from Twitter.

Securing financial support through the European Union’s Competitiveness and Innovation Framework Programme (CIP), ideXlab has just launched OPENISME, or the Open Platform for Innovative SMEs (Small & Medium Enterprises).

With this platform, the company aims to create the new tools necessary to support and facilitate innovation among European SMEs, helping them establish new collaborative projects with universities and research centres.

Because large companies often have a budget to sponsor academic research and get involved in dedicated research partnerships, they get a fairly good view of the academic world and manage to build a small network of available experts, yet they can’t be involved in everything that relates to their market. As for small companies, not having the resources to engage with research centres, they often lack this networking capability and can struggle to find the right people to help them innovate.

“Until now in Europe, most collaborative projects were geographically constrained, for example within regional clusters of innovation or poles of competitiveness,” Jean-Louis Liévin, co-founder of ideXlab, told us.

“What we can do with OPENISME is bring the right expert to even small companies, not only a knowledgeable person for the very specific problem they are trying to solve, but someone that is available and open-minded enough to be willing to collaborate”, continues Liévin.

“You can type in a few keywords, and the engine then comes up with suggestions to clarify or reformulate your search. It also prompts you to define the sort of expertise you are after. Is it for a brand new idea, for a proof of concept, for prototyping, for a paper study or for IP and patents? In only a few seconds, the search returns about fifty relevant experts, worldwide, and you can ask them all a question, anonymously.”

Of course, not all identified experts will answer or be available, but those who are willing to share and market their expertise will get back to the company and may ask for more details before committing to a first meeting. But how does the search engine know that the people it suggests contacting are real experts in the first place?

ideXlab’s proprietary algorithms take into account bibliometric data and other digital footprints in scientific publications and patents to evaluate an expert’s level of competency.

OPENISME has been operational since the beginning of 2014, and ideXlab has already secured large industrial customers such as Thales, Airbus and Schneider Electric.

The reason these large groups finance such an initiative is that it now takes them only a few seconds to identify key motivated experts even outside their traditional business networks, says Liévin. In some cases, it is a conscious decision to look outside the confined box of business-as-usual. With this portal, Liévin also opens the door to SMEs willing to share some expertise. So in the end, he offers a cross-fertilizing innovation portal for companies of all calibres, across all regions of the globe.

For now, ideXlab has struck partnerships in France, the UK, Germany, Italy, Slovenia, Greece and Turkey to promote the platform regionally across Europe. The business model for OPENISME is still in the making. In some cases, ideXlab offers monthly subscriptions for searches as well as some level of technical support. That can include anonymity; in that case, ideXlab plays the intermediary and can even perform a preliminary screening by interviewing the candidate experts.

Liévin is still exploring different business models so that the different players get proper remuneration and all benefit from using the platform. He encourages even small companies to market their know-how and hopes that in the future, the OPENISME platform will become the equivalent of an experts’ social network.

Indian search engine Zoolley provides results based on Alexa ranking, not user profiles

When the term search engine pops up, everyone hits Google, Yahoo or Bing. In fact, Google holds a staggering 68 per cent of the search engine market in the US alone. The Chinese search engines Baidu and Sogou also hold strong market share and presence, and are counted amongst the five biggest search engines.

In the Indian context, one of the biggest search engines of Indian origin is 123Khoj, which looks very similar to Google, and has similar functionality. There is also Guruji, which is said to be the first crawler-based search engine developed in India. This search engine is said to make search simple for Indian users.

Joining this crop in more recent years is Zoolley, which was started in 2012 by Manoj Kumar Mahto, an alumnus of SRM University, Chennai.

How is Zoolley different?

Zoolley is an Internet search engine that claims to be child safe and that provides organic, general search results. Unlike other search engines, Zoolley does not profile its users, but focuses on getting information from key crowdsourced websites.

Based on the Alexa ranking, Zoolley picks up the top 100 sites of the country. In order to be listed on Zoolley, site owners submit their site details to Zoolley, which is later reviewed by the company and then listed.

We also provide entrepreneurs with cheaper online traffic and leads solutions with dedicated support and advice. This is something which differentiates us from other networks.

Recently, Zoolley acquired Ayah Network, a London-based advertising technology platform and ad network that offers traffic based on contextual and behavioural analysis.

Currently, the platform has over four lakh impressions a month, which scales to 451 million views in a month.

The idea and beginnings

Zoolley was created when Manoj was working on his other venture, myBCD. However, Manoj’s love for entrepreneurship started with his first venture, RetailXDirect. This venture was started in 2008, during his second year of engineering. Though it wasn’t successful, it gave Manoj an understanding of the workings of the online and e-commerce space.

In 2010, Manoj decided to try his hand at entrepreneurship again. This time around he set up Mahto BPO, which provided backend and strategic support to e-commerce organizations like Amazon and 100bestbuy. This venture worked well, and he soon decided to build a B2B and B2C platform – myBCD.in (My Business Card Directory) – in 2011.

myBCD began to grow, and Manoj soon teamed up with Maheshwaram. Together they formed a firm, ME Technologies, to promote myBCD in South Indian markets. “After six months of our tie-up, we found our first angel investor, L Ramesh Kumar,” adds Manoj.

It took months to complete the project. They handpicked 11,000+ business cards from Chennai, then processed and uploaded them to myBCD.in. It was while working on this process that Manoj developed the search engine Zoolley.

After testing the site, Manoj found that traction and results were good. He says: “We felt the need to have an Ad Network to have better connectivity with the audiences around the world. So we built our own Ad server with the help of a US-based company. The product is called ‘Zool Ads Magic’.”

Manoj and his team

Way forward

Speaking of their future plans, Manoj says, “We have built a strong network of advertisers and publishers, and in the next five months we are looking at chasing and exploring newer markets across India and the world. Our acquisition of London-based Ayah Network has given us the much-needed boost. This acquisition is expected to raise the revenue to almost three times.”

Zoolley aims to develop better ad science technologies. The team is also working on building advanced search features for the mobile and electronic worlds to target the e-commerce space.

Google to Add ‘Buy’ Button to Search Results Within Next Few Weeks

Within the next few weeks Google will be rolling out a “buy” button that will allow people to purchase certain items directly from its search results pages.

A report in the Wall Street Journal indicates that Google’s buy button will initially be available only in mobile search, and will appear alongside paid search ads — not organic search listings.

WSJ’s anonymous sources have explained how the buy button will work. When you click on the buy button, you’ll be directed to a special Google page where you fill out the usual purchase details. You submit your payment information directly to Google, which then passes the order on to the retailer to be fulfilled.

This marks yet another example of Google aiming to be your one-stop solution for everything from finding web pages, to purchasing products, to ordering takeout. Just last week Google introduced a way for US customers to make delivery orders from local restaurants.

As Google expands into the online retail space it’s sure to pose a threat to current industry leaders like Amazon and eBay. Even smaller retailers aren’t thrilled with this move. Some are privately expressing concern that they will lose their brand identity as more customers place orders on a Google page rather than the retailer’s website.

Through this program, retailers will still have the opportunity to invite customers to opt-in to their marketing campaigns, as well as collect customer information. Google will save all customer payment information before passing the sale on to the retailer.

Instead of taking a portion of the retailer’s sales price, like Amazon and eBay, Google will continue to be paid by retailers through its existing advertising model.

Google has not officially commented on its buy button, so an official launch date has yet to be confirmed.

We Tested How Googlebot Crawls Javascript And Here’s What We Learned


1. We ran a series of tests that verified Google is able to execute and index JavaScript with a multitude of implementations. We also confirmed Google is able to render the entire page and read the DOM, thereby indexing dynamically generated content.

2. SEO signals in the DOM (page titles, meta descriptions, canonical tags, meta robots tags, etc.) are respected. Content dynamically inserted in the DOM is also crawlable and indexable. Furthermore, in certain cases, the DOM signals may even take precedence over contradictory statements in HTML source code. This will need more work, but was the case for several of our tests.

Introduction: Google Executing Javascript & Reading The DOM

As early as 2008, Google was successfully crawling JavaScript, but probably in a limited fashion.

Today, it’s clear that Google has not only evolved what types of JavaScript they crawl and index, but they’ve made significant strides in rendering complete web pages (especially in the last 12-18 months).

At Merkle, our SEO technical team wanted to better understand what types of JavaScript events Googlebot could crawl and index. We found some eye-opening results and verified that Google is not only executing various types of JavaScript events, they are also indexing dynamically generated content. How? Google is reading the DOM.

What Is The DOM?

Far too few SEOs have an understanding of the Document Object Model, or DOM.


As used in web browsers, the DOM is essentially an application programming interface, or API, for markup and structured data such as HTML and XML. It’s the interface that allows web browsers to assemble structured documents.

The DOM also defines how that structure is accessed and manipulated. While the DOM is a language-agnostic API (not tied to a specific programming language or library), it is most commonly used in web applications for JavaScript and dynamic content.

The DOM represents the interface, or “bridge,” that connects web pages and programming languages. The HTML is parsed, JavaScript is executed, and the result is the DOM. The content of a web page is not (just) the source code; it’s the DOM. This makes it pretty important.

How JavaScript works with the DOM interface.
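
As a rough illustration of that point, here is a minimal, hypothetical sketch: the raw source contains only a placeholder, and the headline a rendering crawler actually sees exists only in the DOM after the script runs.

    <h1 id="headline">Loading…</h1>
    <script>
      // The source says "Loading…"; the DOM a rendering crawler sees contains the real headline
      document.getElementById("headline").textContent = "Spring 2015 Hiking Boot Guide";
    </script>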

We were thrilled to discover Google’s ability to read the DOM and interpret signals and content that were dynamically inserted, such as title tags, page text, heading tags and meta annotations like rel=canonical. Read on for the full details.

The Series Of Tests And Results

We created a series of tests to examine how different JavaScript functions would be crawled and indexed, isolating the behavior to Googlebot. Controls were created to make sure activity to the URLs would be understood in isolation. Below, let’s break down a few of the more interesting test results in detail. They are divided into five categories:

  1. JavaScript Redirects
  2. JavaScript Links
  3. Dynamically Inserted Content
  4. Dynamically Inserted Meta Data and Page Elements
  5. An Important Example with rel=“nofollow”

One example of a page used for testing Googlebot's abilities to understand JavaScript.

1. JavaScript Redirects

We first tested common JavaScript redirects, varying how the URL was represented. The method we chose was the window.location function. Two tests were performed: Test A used an absolute URL in the window.location function; Test B used a relative URL.
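
For illustration, here is a minimal sketch of the two variants (the URLs are hypothetical, not the actual test pages):

    <script>
      // Test A: absolute URL assigned to window.location
      window.location = "http://www.example.com/new-page/";

      // Test B: relative URL assigned to window.location
      // window.location = "/new-page/";
    </script>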

Result: The redirects were quickly followed by Google. From an indexing standpoint, they were interpreted as 301s — the end-state URLs replaced the redirected URLs in Google’s index.

In a subsequent test, we utilized an authoritative page and implemented a JavaScript redirect to a new page on the site with exactly the same content. The original URL ranked on the first page of Google for popular queries.

Result: As expected, the redirect was followed by Google and the original page dropped from the index. The new URL was indexed and immediately ranked in the same position for the same queries. This surprised us, and seems to indicate that JavaScript redirects can (at times) behave exactly like permanent 301 redirects from a ranking standpoint.

The next time your client wants to implement JavaScript redirects for their site move, your answer might not need to be, “please don’t.” It appears there is a transfer of ranking signals in this relationship. Supporting this finding is a quote from Google’s guidelines:

Using JavaScript to redirect users can be a legitimate practice. For example, if you redirect users to an internal page once they’re logged in, you can use JavaScript to do so. When examining JavaScript or other redirect methods to ensure your site adheres to our guidelines, consider the intent. Keep in mind that 301 redirects are best when moving your site, but you could use a JavaScript redirect for this purpose if you don’t have access to your website’s server.

2. JavaScript Links

We tested several different types of JavaScript links, coded various ways.

We tested dropdown menu links. Historically search engines haven’t been able to follow these types of links consistently. Our test sought to identify if the onchange event handler would be followed. Importantly, this is a specific type of execution point: we are asking for an interaction to change something, not a forced action like the JavaScript redirects above.

An example drop down language selector on a Google for Work page.

Result: The links were fully crawled and followed.
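
A minimal sketch of the pattern tested (hypothetical URLs): nothing happens until the onchange handler fires.

    <select onchange="if (this.value) window.location = this.value;">
      <option value="">Choose a language</option>
      <option value="/en/">English</option>
      <option value="/de/">Deutsch</option>
    </select>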

We also tested standard JavaScript links. These are the most common types of JavaScript links that SEOs have traditionally recommended be changed to plain text; minimal sketches of each variant follow the result below. These tests included JavaScript links coded with:

  • Functions outside of the href Attribute-Value Pair (AVP) but within the a tag (“onClick”)
  • Functions inside the href AVP (“javascript:window.location”)
  • Functions outside of the a tag but called within the href AVP (“javascript:openlink()”)
  • etc.

Result: The links were fully crawled and followed.
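
Minimal sketches of the variants listed above (the URLs and the openlink() helper are hypothetical, used only for illustration):

    <!-- Function outside the href AVP but within the a tag -->
    <a href="#" onclick="window.location = '/category/widgets/'; return false;">Widgets</a>

    <!-- Function inside the href AVP -->
    <a href="javascript:window.location='/category/gadgets/'">Gadgets</a>

    <!-- Function defined elsewhere, called within the href AVP -->
    <script>
      function openlink(path) { window.location = path; }
    </script>
    <a href="javascript:openlink('/category/gizmos/')">Gizmos</a>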

Our next test was to examine further event handlers, like the onchange test above. Specifically, we were looking at the idea of mouse movements as the event handler, with the URL hidden in variables that only get applied when the event handler (in this case onmousedown and onmouseout) is fired.

Result: The links were crawled and followed.
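
A minimal sketch of this pattern (hypothetical URL): the destination lives only in a JavaScript variable and is applied to the link when onmousedown or onmouseout fires.

    <a href="#"
       onmousedown="this.href = window.hiddenDestination;"
       onmouseout="this.href = window.hiddenDestination;">Read the guide</a>
    <script>
      // The URL never appears in the initial href, only in this variable
      window.hiddenDestination = "/guides/javascript-links/";
    </script>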

Concatenated links: we knew Google can execute JavaScript, but wanted to confirm they were reading the variables within the code. In this test, we concatenated a string of characters that created a URL once it was constructed.

Result: The link was crawled and followed.
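
A minimal sketch (hypothetical URL fragments): the full URL only exists after the string pieces are joined at runtime.

    <a id="concat-link" href="#">Spring catalog</a>
    <script>
      var base = "/cata";
      var rest = "log/spring-" + "2015/";
      document.getElementById("concat-link").href = base + rest;
    </script>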

3. Dynamically Inserted Content

This is clearly an important one: dynamically inserted text, images, links and navigation. Quality text content is critical to a search engine’s understanding of the topic and content of a page. In this era of dynamic websites, it’s even more important that SEOs get on top of this.

These tests were designed to check for dynamically inserted text in two different situations.

1. Test the search engine’s ability to account for dynamically inserted text when the text is within the HTML source of the page.

2. Test the search engine’s ability to account for dynamically inserted text when the text is outside the HTML source of the page (in an external JavaScript file).

Result: In both cases, the text was crawled and indexed, and the page ranked for the content. Winning!
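
Minimal sketches of the two situations (the copy and file name are hypothetical):

    <!-- 1. The inserted text lives in the HTML source, but only reaches the page via JavaScript -->
    <div id="inline-target"></div>
    <script>
      document.getElementById("inline-target").textContent =
        "Hand-stitched leather hiking boots, made to order and shipped worldwide.";
    </script>

    <!-- 2. The inserted text lives outside the HTML source, in an external JavaScript file -->
    <div id="external-target"></div>
    <script src="/js/product-copy.js"></script>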

For more on this, we tested a client’s global navigation that is coded in JavaScript, with all links inserted with a document.writeln function, and confirmed they were fully crawled and followed. It should be noted that this type of functionality by Google explains how sites built using the AngularJS framework and the HTML5 History API (pushState) can be rendered and indexed by Google, ranking as well as conventional static HTML pages. That’s why it is important not to block Googlebot from accessing external files and JavaScript assets, and it is also likely why Google is moving away from its AJAX crawling guidelines for SEO. Who needs HTML snapshots when you can simply render the entire page?

Our tests found the same result regardless of content type. For example, images were crawled and indexed when loaded in the DOM. We even created a test whereby we dynamically generated data-vocabulary.org structured data markup for breadcrumbs and inserted it in the DOM. Result? Successful breadcrumbs rich snippets in Google’s SERP.

Of note, Google now recommends JSON-LD markup for some structured data. More to come I’m sure.
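
As a rough illustration of dynamically inserted structured data, here is a sketch that injects breadcrumb markup into the DOM at runtime, shown with schema.org JSON-LD (the format Google now recommends) rather than the data-vocabulary.org markup used in our test; the breadcrumb trail is hypothetical.

    <script>
      var breadcrumbs = {
        "@context": "https://schema.org",
        "@type": "BreadcrumbList",
        "itemListElement": [
          { "@type": "ListItem", "position": 1, "name": "Home",  "item": "https://www.example.com/" },
          { "@type": "ListItem", "position": 2, "name": "Boots", "item": "https://www.example.com/boots/" }
        ]
      };
      // Create the JSON-LD script tag dynamically and append it to the head
      var tag = document.createElement("script");
      tag.type = "application/ld+json";
      tag.textContent = JSON.stringify(breadcrumbs);
      document.head.appendChild(tag);
    </script>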

4. Dynamically Inserted Meta Data & Page Elements

We dynamically inserted in the DOM various tags that are critical for SEO:

  • Title elements
  • Meta descriptions
  • Meta robots
  • Canonical tags

Result: In all cases, the tags were crawled and respected, behaving exactly as HTML elements in source code should.
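
A minimal sketch (hypothetical values) of inserting these signals via the DOM rather than the static source:

    <script>
      // Title element
      document.title = "Waterproof Hiking Boots | Example Store";

      // Meta description
      var desc = document.createElement("meta");
      desc.name = "description";
      desc.content = "Hand-stitched waterproof hiking boots with free returns.";
      document.head.appendChild(desc);

      // Canonical tag
      var canonical = document.createElement("link");
      canonical.rel = "canonical";
      canonical.href = "https://www.example.com/boots/waterproof/";
      document.head.appendChild(canonical);
    </script>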

An interesting follow-up test will help us understand the order of precedence. When conflicting signals exist, which one wins? What happens if there’s a noindex,nofollow in the source code and a noindex,follow in the DOM? How does the HTTP x-robots response header behave as another variable in this arrangement? This will be part of future comprehensive testing. However, our tests so far showed that Google can disregard the tag in source code in favor of the DOM.

5. An Important Example with rel=”nofollow”

One example proved instructive. We wanted to test how Google would react to link-level nofollow attributes placed in source code and placed in the DOM. We also created a control without nofollow applied at all.

Our nofollow test isolating source code vs DOM generated annotations.

The nofollow in source code worked as expected (the link wasn’t followed). The nofollow in the DOM did not work (the link was followed, and the page indexed). Why? Because the modification of the a href element in the DOM happened too late: Google had already crawled the link and queued the URL before it executed the JavaScript function that adds the rel=“nofollow” attribute. However, if the entire a href element with nofollow is inserted in the DOM, the nofollow is seen at the same time as the link (and its URL) and is therefore respected.
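
Sketches of the two cases (hypothetical URLs): in the first, the nofollow arrives after the link is already in the source; in the second, the link and its nofollow enter the DOM together.

    <!-- Case 1: link in source, nofollow added later via the DOM (too late, so it was ignored) -->
    <a id="late-nofollow" href="/partners/">Partners</a>
    <script>
      document.getElementById("late-nofollow").rel = "nofollow";
    </script>

    <!-- Case 2: the entire anchor, nofollow included, is inserted via the DOM (respected) -->
    <div id="dom-link"></div>
    <script>
      document.getElementById("dom-link").innerHTML =
        '<a href="/partners/" rel="nofollow">Partners</a>';
    </script>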

Ramifications

Historically, SEO recommendations have centered on having ‘plain text’ content whenever possible. Dynamically generated content, AJAX, and JavaScript links have been a detriment to SEO with the major search engines. Clearly, that is no longer the case for Google. JavaScript links work in a similar manner to plain HTML links (at face value; we do not know what’s happening behind the scenes in the algorithms).

  • JavaScript redirects are treated in a similar manner to 301 redirects.
  • Dynamically inserted content, and even meta signals such as rel canonical annotations, are treated in an equivalent manner whether in the HTML source, or fired after the initial HTML is parsed with JavaScript in the DOM.
  • Google appears to fully render the page and sees the DOM and not just the source code anymore. Incredible! (Remember to allow Googlebot access to those external files and JavaScript assets.)

Google has innovated at a frightening pace and left the other search engines in the dust. We hope to see the same type of innovation from other engines if they are to stay competitive and relevant in the new era of web development, which only means more HTML5, more JavaScript, and more dynamic websites.

SEOs, too, who haven’t kept pace with the underlying concepts here and abilities of Google would do well to study up and evolve their consulting to reflect current technologies. If you don’t take the DOM into consideration, you may be missing half of the picture.

Will Facebook Inc (FB)’s New Search Engine Hit Google Inc (GOOG)’s Business?

Social networking company Facebook Inc (NASDAQ:FB) is reportedly working on an in-app search engine that will appear on users’ profile pages. The search engine will help users add links to status updates without having to visit Google.

Currently, the pilot project will be available to U.S.-based Apple Inc. iOS users, who will see an “add a link” option next to the buttons for adding photos or a location. While Facebook is still in the nascent stages of its search engine experiments, the feature is expected to intensify the competition for Google Inc (NASDAQ:GOOG)’s search engine.

The addition of the in-app search feature is expected to offer centralization for Facebook users. Reportedly, this will help users save the roughly 8 seconds spent on opening links, thereby enhancing user engagement on the site.

However, the primary reason behind the recent incorporation of the search engine app is to attract more advertisers toward the platform. According to news reports, Facebook is hosting a new project to deliver content from major U.S. publishers instead of just circulating links on brand pages. Reportedly, the company has been working on this since at least last fall and could launch the project this month.

However, how much this latest initiative by Facebook, which competes with peers like Twitter Inc, would hurt Google’s ad revenues remains to be seen. Though Google’s search engine is far more pervasive than Facebook search, which only works on the Facebook platform, Facebook’s customer count is on the rise.

However, the Zacks Rank #3 (Hold) company has certain advantages over Google, which can not only help it keep its users within its own ecosystem but also help advertisers identify their target market and offer a better digital marketing platform. One of these is forecasting trends via the number of ad hits or how many times a news item has been shared. Reportedly, Facebook had indexed over 1 trillion posts to find out which posts were being shared, and who shared them, offering exclusive data to advertisers.

Further, Google lost mobile ad market share last year, dropping to 38.2% from 46% in 2013, according to eMarketer. Facebook, on the other hand, saw its share increase by a point. The social network is particularly strong on the mobile platform, and according to market research, users spend about 28% of all Internet time on Facebook.

So if the new in-app search engine becomes a hit, it is likely to help Facebook retain its users within its ecosystem for a longer duration, in return attracting more advertisers and taking more advertising business from Google.

Iran’s New Search Engine Denies Access to Political or Human Rights Content on Internet

A new Internet search engine developed by the Iranian government, Parsijoo, has been designed to keep Iranian users from accessing any political or human rights-related content.

According to research conducted by the International Campaign for Human Rights in Iran, the new engine’s display of search results does not follow the norms of other engines used worldwide such as Google or Yahoo, which display and prioritize results according to the number of views, technical aspects of the website design, or ads placed with those companies, nor does it deliver the same content.

Rather, the results are displayed based on the government’s political sensitivities and conform to the state’s rigid censorship of all media in Iran—any objectionable content that departs from the official line is not delivered.

For example, when searching in the video section of Parsijoo for the name of Iran’s Green Movement leader, “Mir Hossein Mousavi,” the user will only be shown a list of state videos containing propaganda against Mir Hossein Mousavi.

Similarly, searching for the name of student activist “Bahareh Hedayat,” or “The Universal Declaration of Human Rights” will display no results. In fact “The National Search Engine,” as Parsijoo is called by Iranian officials, will only make content available to users that matches the state’s version of events, individuals, and issues.

The development of this search engine is the latest step in the government’s efforts to develop a National Internet in Iran (a domestic Intranet, separate from the global Internet), in which the state controls all access to online content through national operating systems, browsers, applications, online services, and now, search engines.

These steps, alongside the authorities’ online filtering activities and aggressive persecution of Internet professionals and activists, continue to confirm that the state sees the Internet as the principal battleground in its struggle to control the citizenry’s access to information.

In a Mehr News Agency interview to mark Parsijoo’s unveiling on May 5, Alireza Yari, Secretariat of The Local Search Engine Strategic Council within the Ministry of Telecommunications and Information Technology, boasted about the capabilities of Parsijoo’s new version as compared to its predecessors.

ILNA’s May 5 report on the Telecommunication Ministry’s Budget and Planning Office indicated that approximately $27 million has been allocated for “the research on domestic telecommunications and information technology, technology upgrades, and developing national products,” and $40 million has been allocated for “implementation of state responsibilities regarding information technology.”

Calling the Internet “a double-edged sword,” President Rouhani said at a Teacher’s Day celebration on May 4, 2015, “Of course, sometimes restrictions are necessary, too. One of the important responsibilities of the Education Ministry is appropriate usage of these tools by the youth.”

These remarks seem to be a noticeable retreat from previous remarks by Rouhani in which he robustly defended Internet freedom. For example, when asked about Internet access in an interview with NBC on September 19, 2013 during his campaign for the presidency, Rouhani said: “The government’s view is that the people should have access to all the information in the world.”  And on September 1, 2014, soon after assuming the presidency, Rouhani asserted, “We cannot shut the gates of the world to our young generation…. Once, there was a time that someone would hide his radio at home, if he had one, to use it just for listening to the news. We have passed that era.”

Is Google Showing Fewer Ads Per Search?


Google’s Q1 2015 earnings report showed that paid clicks on Google websites were up 25% year-over-year, while average cost-per-click (CPC) on Google websites was down 13%. Unfortunately, these data points tell us almost nothing about the state of Google paid search.

On the company’s earnings call, outgoing Google CFO Patrick Pichette revealed for the first time that, were it not for YouTube TrueView ads, Google “sites clicks would be lower but still positive, and CPCs would be healthy and growing year over year.”

This story fits much better with the paid search trends that a number of industry analysts, including myself, have been seeing — and it’s nice to finally have some clarity on why Google has been reporting hefty CPC declines when so many industry data sources have been showing the opposite for Google paid search.

The shift to mobile as an explanation for declining CPCs was a red herring in recent years, and foreign exchange rate effects could only explain so much, but now we know that YouTube ads are big enough to drop the officially reported growth rate of Google sites CPCs by at least 13 percentage points.

Looking At The Third-Party Data

There are plenty of companies that release reports with data on the paid search industry, and while each of these reports may be subject to its own flaws and biases, taken together, they can shed light on the underlying trends that are affecting all of our search programs.

The data for Q1 2015 has been more consistent than usual on one key point: Google paid search click growth was weak even as click-through rate (CTR) rose. iProspect reported that Google clicks were down 11% Y/Y as impressions declined 35%. IgnitionOne showed clicks up 4% across all search engines, but with impressions down 20% and CPC up 21% (23% for Google). The Adobe Digital Index shows Google spend down 1%, even as CPC rose 6%, suggesting a decline in clicks of nearly 5%, despite CTR rising 18%.

In the latest quarterly report from my company, Merkle RKG, we show Google click growth at 0.2% Y/Y in Q1 as CPCs rose 13%. Impressions were falling 18% Y/Y by the end of the quarter. While factors like the default search provider change for Firefox, slowing tablet growth, and the maturation of the PLA market contributed to slowing overall growth, they do not explain it fully.

Maybe all of these reports are “wrong,” and there is some reason our data isn’t a good reflection of the true underlying trends in Google paid search; but after digging into this question in recent weeks, I’m convinced that there was a significant change in the AdWords market that began to take hold in mid-2014.

The bottom line is that Google may actually just be showing fewer ads, even when accounting for the shift from text ads to Product Listing Ads (PLAs) and from desktop to mobile. This notion runs counter to the popular narrative that the Google SERP has become increasingly overrun with ads, but a number of data trends are pointing in this direction.

Google’s incentive for such a change would be to drive a higher percentage of ad clicks to the ads at the top of the page, which yield them higher CPCs. Showing fewer ads wouldn’t be that different from what they’ve done over the years with the numerous ad extensions that are available and preferentially served for top ads.

Google.com Impression Growth Reverses Course

Looking into AdWords impression trends, my colleague and fellow Search Engine Land columnist Andy Taylor and I found that Google.com non-brand text ad impression growth in June 2014 was 19% Y/Y for the median AdWords program managed by our company.

Google impression growth chart from the Merkle RKG Q1 2015 paid search report.

The next month, growth fell sharply to 7%, and by October 2014, impressions were declining by 12% Y/Y. By March 2015, impressions were declining by 18% Y/Y.

We looked specifically at Google.com results to take search partners out of the equation. Other industry sources have pointed to declining search partner impressions as a reason for overall Google declines, and I originally believed this was the most likely cause for the declines we were seeing.

Notably, eBay moved its mobile search ads from Google to Bing in mid-2014; this had an appreciable impact on overall AdWords search impressions, but the impact to clicks was minor.

We also wanted to isolate desktop results, because factors specific to our data set led to above-average growth for mobile traffic in Q3 2014. We do not believe that desktop traffic was impacted at the same time by similar factors.

Importantly, while traffic has been shifting to both mobile devices and PLAs over time, the timing and severity of the deceleration in desktop text ad impression growth does not match up well to what has been a steadier and slower shift to those segments.

Similarly, another consideration here is Google making close variant matching mandatory back in late September 2014. This could have led to increased competition for any given query and driven impressions for our programs lower. We do see impression growth drop sharply in October 2014, but the timing of changes in other data points, and their directional movement, does not fit this picture well.

First Page Minimum Bids Have Risen Sharply, And Average Position Is Higher Up The Page

As Google impression growth stalled and ultimately fell into decline, we saw a concurrent rise in average Google first page minimum bids.

Google first page minimum bid chart from the Merkle RKG Q1 2015 paid search report.

Using March 2013 as a baseline, we see first page minimums for non-brand text ads roughly double by July 2014 after being stable if not down slightly into early 2014.

First page minimums increased further as we entered the holiday shopping season, but they have remained greatly elevated into early 2015. We see similar trends for ads across all Quality Score levels with significant traffic.

We would expect to see this type of trend if Google began showing fewer ads per result, competition increased significantly, Google began showing fewer ads for less competitive queries, or Google simply raised minimum bids directly.

It’s difficult to rule out any combination of these factors, but average position trends can point us to more likely scenarios.

Google average position chart from the Merkle RKG Q1 2015 paid search report.

In the last year, our average position for a non-brand Google text ad has moved up the page by about half of one position for the median program. Again we see a major shift occur in the second half of 2014 following stable results in the earlier part of that year.

If increased competition were a major factor in driving down impressions and pushing up first page minimum bids, this result would be unlikely.

Will We Ever Know For Sure?

Short of Google itself weighing in to explain the paid search impression decline and weakening click growth that so many of us are seeing, we’ll have to rely upon more circumstantial evidence to figure out the extent to which forces beyond our control are shaping the paid search market. Unfortunately, on the Google earnings call, Patrick Pichette declined to elaborate when questioned about search trends specifically.

This isn’t just idle curiosity, though. If there is external pressure pushing up CPCs and driving down impression and click growth, but advertisers are not aware of what is really going on, they may be prompted to take action in ways that are not helpful — or even harmful — to their search programs.

While no two search programs are alike and each is impacted by a unique combination of factors, having a decently accurate sense of larger industry trends can provide much-needed perspective and context to those individual experiences. At least, I’d like to think so.

Wolfram Alpha Launches Image Identification Search Engine


Reverse image search is something many search engines have been working on for years. Google lets you upload an image to its image search and returns results, as do many niche image search engines. Wolfram Alpha launched its own today at imageidentify.com.

Wolfram’s image identification engine works the Wolfram way, showing you entity data behind what it thinks the image is. Sometimes the results are just amazing and sometimes they make you scratch your head, but that is what you’d expect from a reverse image search engine.

You go to imageidentify.com and drag and drop the image on the home page:

wolfram-alpha-image-identification

Then it returns an answer to what it thinks the image is and if it has data on that entity, it will show that information below with a link to see more data at the main Wolfram Alpha search engine.

Here is an example of a good result:

imageidentify-cat

Here is an example of it getting it wrong:

imageidentify-apple-logo

Here is one that is correct but with less data:

imageidentify-apple

Wolfram isn’t staying up to date on the latest tech trends?

imageidentify-apple-watch

But yes, it knows a plane when it sees one:

imageidentify-jetliner

Thank god it thinks I am human:

imageidentify-barry

Google and Twitter sitting in a tree, putting tweets in search results for you and me

In February, Twitter signed a firehose deal with Google to bring tweets right into Google’s search results. This week, that integration started rolling out on mobile, with a promise to also update the desktop version “shortly.”

As I wrote while covering the announcement, Google is hoping to boost its real-time search chops while Twitter is aiming for more users and engagement. Yet at the end of the day, this is one of those rare partnerships that benefit the user first, and the companies later (Twitter is getting an undisclosed amount of money from Google, and Google can potentially monetize tweets with its own ads, but right now it’s unclear if either will be significant).

First and foremost, this partnership means Google users finally have access to Twitter’s stream. Because tweets are often full of timely information, this means searching for anything relatively recent on Google will start bringing up messages sent out on Twitter. Many events often happen on Twitter first, and that’s data that can be incredibly useful to have indexed for users to quickly find.

Twitter’s search engine works, but it’s nothing special. Google is the king of search, and with Twitter data, it’s suddenly about to get even more useful.

Assuming that Google users find the tweets they’re looking for, they will only think more highly of the search engine, and presumably use it more. In this way, Google is next in line as the one to benefit from this new partnership.

You’d think Twitter would be next, but there’s actually one more entity to acknowledge: Twitter users. Yes, we realize there is a big overlap between Google users and Twitter users. Still, if we examine them separately, it’s clear they both win.

Twitter users start to benefit once incoming Google users act on the tweets they find. That can be a favorite, a retweet, or even a reply. If that starts to happen at scale, and Google has plenty of that, Twitter users should see more content on the social network, especially in relation to tweets that Google deems important and relevant.

Last on the list is Twitter itself, if the company can convert all this new traffic into users who keep coming back. Signing up to engage is one thing, but actually choosing to become an active Twitter user is what the company really needs. That’s no easy feat, and so Twitter’s opportunity to benefit from this partnership requires the most work.

Maana Emerges from Stealth with Search Engine for Big Data, Backed by Over $14 MM in Funding

Maana, a pioneer in search engine technology for big-data-fueled solutions, came out of stealth today with its new search and discovery platform. In use at some of the world’s largest corporations, Maana drives significant improvements in productivity, efficiency, safety, and security in the operations of core enterprise assets. The company is funded by Chevron Technology Ventures, ConocoPhillips Technology Ventures, Frost Data Capital, GE Ventures, and Intel Capital.

“The nature of search in the enterprise is changing. The driver behind this change is the need to have preventive and predictive capabilities in operating core enterprise assets,” said Babur Ozden, founder and chief executive officer at Maana. “Such capabilities require connecting technical datasets from a large number of data sources to thousands of employees involved in operations and maintenance of these assets. Increasingly, corporations consider search as the easiest way to implement this connection that is also scalable. Search has proven to be a powerful self-service analytics platform for data from web pages and documents, and now Maana expands it to data from core enterprise assets.”

Why Apple May Be Building A Search Engine Of Its Own


The Apple logo is seen at the flagship Apple retail store in San Francisco, April 27, 2015. Reuters/Robert Galbraith

First, Apple Inc. threatened to go “thermonuclear” on Android. Then, it kicked YouTube and Google Maps off the iPhone and launched its own Maps app. Everyone knows Apple and Google are bitter rivals, but is Cupertino’s disdain so strong that it would launch a search engine of its own? There are many signs that suggest Apple is up to something when it comes to search.

Twitter CEO Dick Costolo name-dropped Apple during his company’s earnings call this week, saying the two companies are working together to bring Twitter content to Spotlight, the search feature used on Apple’s iOS and Mac devices. Spotlight has been around for years, but Apple began making it a higher priority last year when it brought it front and center to Yosemite, the latest version of its OS X desktop operating system.

Based on numerous job listings on LinkedIn as well as Apple’s own website, the Cupertino tech giant isn’t done building out Spotlight. The job listings include iOS Spotlight software engineers, data scientists for Spotlight suggestions on both iOS and OS X as well as an engineering project manager for something that is labeled simply “Apple Search.”

Apple employee Jamie de Guerre appears to confirm the company’s search engine on his LinkedIn profile, where he wrote, “I lead the Engineering Program Management team for Apple’s new search engine that provides results as you type in Safari and Spotlight.” De Guerre also came to Apple by way of the company’s 2013 acquisition of Topsy, which was a social search startup.

Each time Apple makes Spotlight more robust, it’s one less reason to go to Google.com.

“I think the bigger question is not whether they’re going to do something. It’s, What exactly are they going to do?” said Dave Ragals, global managing director of search for IgnitionOne, a digital marketing firm.

Why Apple May Enter Search

All this activity comes as Google’s deal to be the default search provider for Apple’s Safari Web browser is set to expire later this year. “It’s no longer a rumor,” said one source who works in the in-app search space and who is familiar with the companies but who was not authorized to speak publicly on the matter. “It’s an accepted fact that Apple is going to move away from Google Search.”

For Apple, there are several reasons to leave Google behind. For starters, breaking into search could present Apple with another promising revenue stream — a market that is expected to be worth nearly $82 billion in 2015, according to eMarketer. But perhaps the most alluring reason would be how much this could hurt Google. Kicking Google off of Safari could cost the search giant billions of dollars annually. And if Apple entered the space, the company would eat into its chief rival’s core market.

Already, Apple has ditched Google search when it comes to Siri, the voice assistant found on both the iPhone and the iPad. Siri, which can be used for voice search queries, relies on Microsoft Bing for its search results.

How Apple Could Enter Search

The question is, How would Apple go about building a search engine? Creating a hybrid engine built in partnership with a Google rival like Microsoft Bing or Yahoo is one option — in fact, Yahoo CEO Marissa Mayer has said many times publicly that she would love for her company to replace Google on Safari. Or Apple could instead choose to partner with multiple smaller companies, such as Yelp and Twitter, and leverage their data for its search engine.

And of course, Apple could simply build a search engine from nothing. That’s the most difficult of these options. It would require huge amounts of data, a vast workforce and a lot of money, but if there’s one company that has all that, it’s Apple.

But if Apple does enter search, it will likely not try to compete with Google head-on. The Mountain View, California, tech giant dominates Web search and won’t likely be dethroned any time soon, but Google has struggled when it comes to mobile. Unlike with websites, it’s pretty hard for search engines to index information found within apps, which leads many smartphone users to skip Google and instead search apps directly.

A handful of companies, including Google, are working to make the data within apps easier to search, but no one has yet won that war. “That’s the disadvantage of apps — they’re each their own silo,” said Jonathan Opdyke, CEO of HookLogic, a company that provides advertising for search engines on retailer websites. Creating a search engine capable of finding and comparing data from multiple shopping apps, for example, would be beneficial for iPhone users, he said.

But Apple is rarely one to do the obvious, and search itself seems ripe for reinvention. “To me the question is, Are we thinking too two-dimensionally as far as what search means?” Ragals of IgnitionOne said. Several experts suggest that Apple may choose to skip text search altogether and instead build an engine that relies more on contextual cues and predictive algorithms. That’s a method other companies, including Google, have tried, but no service of this kind has gone mainstream thus far.

But consumer apps have been the scene of some of Apple’s greatest recent flops. Apple Maps was such a bad product at launch that CEO Tim Cook ended up apologizing for it and firing the head of the product. Some of the company’s other software services, such as iCloud, also have failed to impress users.

Whatever route Apple takes, there are numerous signs that seem to suggest the company is getting more serious about playing a bigger part when it comes to the search queries conducted through its devices, and there are just as many reasons for Apple to jump into the market, which has become more competitive than it’s been in more than a decade.

“It makes a lot of sense for Apple to do their own search engine,” said Adam Epstein, president of adMarketplace, a search ad company. “They can enhance user experience because they can starve Google for some revenue and because they can possibly open up a huge new revenue stream for themselves. It would be a huge mistake for Apple not to take search very seriously.”

Iceland Launches Human Search Engine to Educate Tourists


The ‘Ask Guðmundur’ campaign features representatives from different Icelandic regions with expert knowledge to impart

Iceland has launched a new campaign that sees the introduction of the world’s first human search engine. ‘Ask Guðmundur’ is a unique service offering a personal search platform for tourists to find out some of the country’s hidden secrets.

It provides a human alternative to traditional online search engines, with seven Icelanders named Guðmundur (male) or Guðmunda (female) across Iceland volunteering for the chance to offer insider knowledge, advice and local secrets to tourists who want the ultimate Iceland experience.

The volunteers share one of the most popular names in the country and aim to provide a truly personable service. They are representatives from each of Iceland’s seven regions and will offer their insider knowledge to the world via Inspired By Iceland’s social media platforms.

The service kicked off on April 28 with ‘Guðmundur of the North’, Guðmundur Karl Jónsson, who is a keen skier and golf lover. He encouraged people from around the world to submit any questions they have about Iceland, such as ‘What is the best place to visit in the North?’ or ‘Is it always cold in Iceland?’


Ask Guðmundur runs from spring to fall and questions can be submitted through Inspired By Iceland’s Facebook and Twitter accounts using the hashtag #AskGuðmundur. Weekly videos will also air on YouTube. Inga Hlín Pálsdóttir, Director of Tourism & Creative Industries at Promote Iceland, said:

So many people have questions about our wonderful country, so why ask a computer when you can ask Guðmundur?

We’re hoping that the world will embrace this new, personal service and will share their questions with our human search engine. We like to offer a personal approach in all we do and this service offers just that. Each of our seven Guðmundurs is a specialist in their region and is excited about answering the world’s questions about Iceland in a truly human way.

Iceland is a small country with a wealth of secrets and now Ask Guðmundur will help us share them with the world.

Bing to Index Content from iOS, Android, and Windows Apps

Bing intends to expand its index of search results to include content from mobile apps, the company stated in an announcement today. In fact, Bing is building what it calls a “massive index” of content from apps across all three major platforms — iOS, Android, and Windows.

Content on mobile apps typically exists in a vacuum, meaning it’s not crawlable, searchable, or indexable outside of the app itself. Google has made some progress with respect to indexing mobile app content, but even then it’s limited to select Android apps.

Bing wants its users to be able to find mobile app content on its search engine regardless of what type of device they’re on. Bing calls this new functionality App Linking, and explains how SEOs can take advantage of it to encourage the installation of new apps:

“At first, you may ask yourself: how do I apply my SEO chops to apps? How can I as the web search person help drive app installs through search? Isn’t the app world all about app developer? And your app developer may be thinking “how can the web search folks help me”? Could it be there is a place where these worlds meet? There is, and it’s called App Linking!”

Bing outlines in its blog the technical markup SEOs and developers need to apply in order to make app content indexable and searchable. If you’re already familiar with using Schema markup on your website then these steps should feel relatively familiar.
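
As a rough sketch only, and assuming Bing’s App Linking builds on the open App Links (applinks.org) meta-tag convention, markup along these lines tells crawlers which app screens correspond to a web page; the app names, IDs and URLs here are hypothetical, and Bing’s blog post has the authoritative details.

    <meta property="al:ios:url" content="exampleapp://products/42" />
    <meta property="al:ios:app_store_id" content="123456789" />
    <meta property="al:ios:app_name" content="Example App" />
    <meta property="al:android:url" content="exampleapp://products/42" />
    <meta property="al:android:package" content="com.example.app" />
    <meta property="al:android:app_name" content="Example App" />
    <meta property="al:web:url" content="http://www.example.com/products/42" />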

If you want to take advantage of Bing’s App Linking, the company says the best time to apply the markup is right now. Bing is already crawling the web looking specifically for App Links and actions markup in order to build up its index of app content.

No exact timeframe was given for when searchers can expect to see app content indexed in Bing, though the company said it expects to start applying App Linking to its search results soon.

New search engine from Waterfox founder aims to take a punch at Google

The young developer behind web browser Waterfox, which boasts 4m downloads, is now hoping to create a viable rival to Google’s ubiquitous search engine by offering users absolute privacy online, and directing cash to charities.

Alex Kontos, who coded Waterfox in a month aged just 16 back in 2011, has junked the traditional advertising revenue model used by the search giants in favour of a social enterprise model.

With Storm, his new engine, a small percentage of any purchases made with participating e-tailers is given to a growing roster of charitable organisations. The charity will share a proportion of that commission with Waterfox and Storm to fund their ongoing operation.

User data will also be completely anonymised so that third parties cannot keep tabs on an individual’s searches.

The aim is to tempt millions of users away from Google and create substantial revenues for worthy organisations. Up to £20 could be generated from each active user per year for charitable causes, the company claims.

Kontos, now 20, conceived the business model with venture capitalist Andrew Crossland, who has invested in the project, and the company’s new chief executive, Kevin Taylor, former CEO of internet security giant Symantec.

The company is offering a white-label version of its search engine to charities, social enterprises and societies, which will allow these organisations to stamp their branding on the engine and share it with their communities.

“That’s the key to scaling quickly,” says Taylor. “You can’t just go to a marketplace and say, ‘use this platform’ but if you put a wrapper around it for brands with large communities, and they tell people to use it to raise revenue for a good cause, we’ll grow much faster.”

Alex Kontos, 20, coded Waterfox in a month

Last month, the EU accused Google of cheating competitors by distorting internet search results in favour of its Google Shopping service.

The European Commission has also launched a competition inquiry into its Android mobile operating system. “People are tired and jaded by the corporate West Coast alternatives,” says Taylor. “We want to be a disruptive force.

“As a search engine we’re not seeking to make money out of PPC, which is the primary mechanism that other search engines use to make money. That shouldn’t be a sole purpose.”

Kontos and Taylor are currently in talks with “a number of household names” from the charity sector. “One medium-sized charity with a royal patron is likely to sign this week,” says Taylor. The pair are keen to work with any large, branded charity. “We have a moral dilemma over what extent we choose the organisations that benefit,” says Taylor, who reveals that a Premier League football club has also been in touch to fund its sports club activities.

“As long as we give the user the opportunity to donate to a wide range of charities, then we’re being fair. But when a certain charity is marketing a branded version of our engine to its community, the money will only go to them, which we’re OK with.”

Storm, which is powered by Yahoo!’s search engine, will also plug in other innovative search providers to ensure a quality user experience, according to Taylor. “The search results will be coherent and useful, and cut out all the non-specific advertising that people are currently bombarded with on other providers.”

The search results will identify which retail partners are participating in the scheme, so that users can opt to shop on those sites.

Kontos has been coding since he was 12, and created Waterfox when he found that existing web browsers were too slow – “like driving a sports car that was stuck in second gear,” he says.

To date, Kontos has generated no revenues from Waterfox, despite working on it “every single day” since the browser’s launch. “This new search engine is a way to pay the bills and do something good,” says Kontos, who is aiming to attract 10m users to the engine within two years.

User information is harvested, stored, analysed and frequently monetised by all kinds of organisations, prompting millions of sophisticated web users to move away from Google’s incumbent engine to maintain their digital privacy.

Online privacy has become big business; Jack Cator, the founder of Hide My Ass!, which allows users to surf the internet anonymously using virtual private networks, recently sold his business to internet security firm AVG for £40m. The desire for online anonymity has prompted an indie internet search engine revival.

Most of the successful entrants offer users the ability to search the web privately and securely, hiding their data from brands and data crunchers online. DuckDuckGo, which brands itself as a champion of privacy rights, has now been included on Apple’s internet browser Safari. Qwant, StartPage, and Ixquick are also vying for market share in the private browsing space.

Kontos is on a wider mission to give power and privacy back to the user. “I don’t know how many people are bothered by what goes on behind the screen, but it’s important to fight for people’s rights whether they care about them or not,” he says.

His Waterfox browser, which has 250,000 daily active users, keeps the user’s IP address and browsing history private and secure. “We offer complete anonymity,” says Kontos. “It’s a moral issue. You don’t want to feel that there is always someone looking over your shoulder when you’re online, so I removed all the tracking features.”

Baidu, China’s Leading Search Engine, Makes Strategic Investment In Content Recommendation Platform Taboola

Baidu, the maker of China’s largest search engine, has made a strategic investment in content recommendation startup Taboola. The companies declined to name the exact amount of the deal, but said that it is in the “multi-millions.”

Taboola serves up the links in the “Around The Web” and “Recommended For You” sections you see at the bottom of articles on sites such as The Atlantic, Business Insider, and Mail Online. Baidu’s stake is a follow-on to the $117 million Series E round led by Fidelity Management that Taboola (which competes with Outbrain) announced in February at a reported valuation of almost $1 billion.

At that time, chief executive officer Singolda told TechCrunch’s Ingrid Lunden that the company’s top priorities include expanding into more international markets.

The potential synergies between Taboola and Baidu are obvious. Baidu can use Taboola’s tech to build its knowledge graph, while the deal represents a way for Taboola to break into the growing Chinese market, which now has an Internet penetration rate of 47.9 percent.

Baidu claims a 75 percent share of China’s combined PC and mobile search market and says it powers tens of billions of search queries every day. Taboola, which was founded in 2007, says that it now delivers more than 200 billion monthly content recommendations to 550 million users.

While Taboola’s research initially revolved around figuring out how to deliver relevant content for users on desktop sites, the company is now trying to figure out how to map data from other sources, including mobile devices, social media sites, and apps.

This aligns closely with Baidu’s current business strategy. At the end of April, Baidu reported in its Q1 2015 earnings report that mobile accounted for more than half of its quarterly revenue for the first time ever, a milestone it spent two years working toward. The transition, however, has had its share of growing pains, with Baidu’s revenue and net profit both declining year-over-year.

Working with Taboola can help Baidu in its aggressive push to get more revenue from its mobile products.

“We’re definitely a mobile company first now and everything we do begins with mobile and takes priority over our PC products,” Baidu spokesman Kaiser Kuo told TechCrunch. “We’re not ignoring PC, but in just eight quarters, we built a mobile business that is the same size as our PC business, which took 15 years to build.”

Unlike the U.S. and Europe, where sponsored links are standard fare for major sites, there are few companies in China that provide the same kind of services as Taboola. Taboola’s other moves into Asia include a strategic partnership with Yahoo! Japan, which it inked last year. The company now powers content recommendations across the Yahoo! Japan News site network.

In a prepared statement, Taboola founder and chief executive officer Adam Singolda said “we believe that discovery has massive growth potential in both existing and untapped markets around the world, and we plan to grow this new category even further with Baidu to help change the way people in China discover content they may like and never knew existed.”

While Taboola is headquartered in New York, its research and development team is based in Tel Aviv. This makes it the third company with Israeli operations that Baidu has invested in so far (the others are music app maker Tonara and video tech developer Pixellot), following a trend that sees major Chinese tech companies pouring serious yuan into the country’s startup scene.

New Search Engine to Promote Christian and Family Oriented Websites and Businesses

BRANDON, Fla., May 23, 2015 /Christian Newswire/ — TrueSearch.today is a new search engine that promotes Christian and family oriented websites and businesses. They are in the early stages of development and are asking all Christian churches, and Christian and family oriented websites and businesses, to please submit their website to TrueSearch for inclusion in search results.

Small and startup businesses are especially encouraged to submit their website to TrueSearch.

Website and business owners must affirm their acceptance of Christ Jesus as Lord and Savior and answer a few Bible questions before being permitted to submit their website address for inclusion in search results. Once a website is submitted, it generally takes a few days before the website begins to show up in search results on TrueSearch.

People interested in adding a website to TrueSearch are asked to visit www.TrueSearch.today and click the ‘Add a Website’ link at the top of the page.

Bing Will Roll Out Their Mobile Friendly Ranking Algorithm In The Upcoming Months

Following Google’s lead, Bing announces they will launch a mobile friendly algorithm, but this one won’t be any Mobilegeddon.


Bing has announced they will be introducing their own version of mobile friendly ranking signals in the upcoming months. With that, Bing explained how they determine if a web page is mobile friendly, when they will add the mobile friendly label to your site and what tools they have to help webmasters ensure their sites are mobile friendly.

Unlike Google, Bing has not specified a date for when the mobile friendly algorithm will launch. Instead, Bing is taking a slower approach in order to gather webmaster feedback along the way. The Bing team told me they are doing this to better communicate the changes, over time, before the rollout happens, and to reduce potential anxiety around the change.

Bing has seen a shift in the mobile space in the past year and has focused their efforts over that time in building out mobile friendly factors for Bing mobile search.

Bing Mobile Friendly Tag

Last month, we discovered Bing testing the display of the Bing mobile friendly label in the Bing mobile search results. Bing said they have seen “great feedback” after testing this new label and based on this, they will be rolling out the label more broadly to mobile searchers.

Here is a picture of how it appears in the mobile results:

bing-mobile-friendly-label

You can assume that if your site shows the mobile friendly label in the Bing results, Bing recognizes your site as mobile friendly, and you would benefit when Bing pushes out this new algorithmic update.

Bing’s Mobile Friendly Algorithm

The new algorithm seems to work a lot like the Google Mobile Friendly update, which launched on April 21, 2015. But unlike Google, Bing won't yet give a date for when their algorithm will go live. They want to give webmasters time to digest this news, along with the Bing mobile friendly label and the mobile friendly ranking advice Bing published last year.

Bing said relevancy will always trump mobile friendliness. So sites that are not mobile friendly but are more relevant to the query can, and most likely will, still rank very well, even above mobile friendly web sites. This is because Bing is looking to strike the right "balance" between relevancy and user friendly search results.
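To see why that balance matters, here is a purely illustrative Python sketch. Bing has not published its ranking formula; the scores and the boost value below are invented for illustration only. The point it makes is simply that a modest mobile friendly boost does not let a less relevant page overtake a much more relevant, non-mobile-friendly one.

```python
# Purely illustrative: Bing has not published its ranking formula.
# This toy model only shows why a strongly relevant, non-mobile-friendly
# page can still outrank a mobile-friendly but less relevant one when the
# mobile-friendly signal is a modest boost rather than a dominant factor.

MOBILE_FRIENDLY_BOOST = 0.1  # hypothetical small boost

def toy_score(relevance, is_mobile_friendly):
    """Combine a 0-1 relevance score with a small mobile-friendly boost."""
    return relevance + (MOBILE_FRIENDLY_BOOST if is_mobile_friendly else 0.0)

pages = [
    ("highly relevant, desktop-only page", toy_score(0.9, False)),     # 0.90
    ("somewhat relevant, mobile-friendly page", toy_score(0.6, True)), # 0.70
]

for name, score in sorted(pages, key=lambda p: p[1], reverse=True):
    print(f"{score:.2f}  {name}")
# The desktop-only page still ranks first because relevance dominates.
```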

Bing again has not provided a date for the launch, and they are also downplaying the impact this will have on the mobile results. They obviously do not want this to turn into a Mobilegeddon media frenzy. Bing has not shared how many pages in their index are currently mobile friendly, or the potential impact this may have. But they did tell me they will share the go-live date before rolling out the new algorithm.

Bing’s mobile friendly algorithm will run in real time, so when you go mobile friendly, you will benefit from the ranking algorithm as soon as Bing crawls the new mobile friendly version of your page.

Bing Webmaster Tools Mobile Friendly Tool

Bing is also launching a new tool later this summer for webmasters to test their sites. The tool will help “Webmasters to analyze webpages using our mobile friendliness classifier and help them understand the results.” The tool will likely work a lot like Google’s mobile friendly testing tool, providing a yes or no answer to whether your site is mobile friendly, along with suggestions on how to make your site mobile friendly if the answer is no.

The results of the tool should match what you see in the Bing mobile results with the mobile friendly label. With Bing, just like Google, you are either mobile friendly or you are not; there are currently no degrees of mobile friendliness.

The tool should launch sometime later this summer.

How Does Bing Determine If Your Page Is Mobile Friendly?

(A) Clickability of the navigation and buttons on your web site is one aspect. Are the buttons easy to press with a finger? Are they spaced out enough? Will users click the wrong link by accident because the site isn’t designed with mobile users in mind?

(B) Can you easily read the content on the web page without having to zoom in or scroll left and right? Desktop sites that require pinching and zooming on mobile devices are not mobile friendly. You can control text size and scaling in your HTML with the viewport meta tag and related settings.

(C) Scrolling up and down is expected on mobile devices, but scrolling left to right is not. You don’t want a site where users need to scroll sideways to view more of the content. It is not what users expect on mobile devices and is not user friendly.

(D) Does the content load on mobile devices? Flash is an example of content that is not mobile compliant: it does not render on iOS or Android devices. If your site cannot render on mobile devices, it is very likely that Bing will not show it to mobile users.

(E) Don’t block your CSS, JavaScript and other external resources from Bingbot or Bingbot mobile. The bots need to crawl these resources and files to render your full page and determine whether it is mobile friendly. Blocking these files will prevent Bing from understanding your layout and prevent them from labeling your web pages as mobile friendly. A rough sketch of how a webmaster might automate a few of these checks appears below.
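As a rough, unofficial starting point, here is a small Python sketch of how a webmaster might spot-check items (B), (D) and (E) before Bing’s own testing tool ships. This is not Bing’s classifier: the example URL, the user-agent token used for robots.txt matching, and the regular expressions are assumptions for illustration only.

```python
# Unofficial spot-checks for a few mobile friendliness criteria.
# Not Bing's classifier; the "bingbot" token and patterns are assumptions.

import re
import urllib.request
import urllib.robotparser
from urllib.parse import urljoin, urlparse

BINGBOT_UA = "bingbot"  # assumed user-agent token for robots.txt matching

def fetch(url):
    """Download a URL and return its body as text (best-effort decoding)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def has_viewport_meta(html):
    """Check (B): a viewport meta tag suggests the page scales for mobile."""
    return re.search(r'<meta[^>]+name=["\']viewport["\']', html, re.I) is not None

def uses_flash(html):
    """Check (D): Flash embeds will not render on iOS or Android devices."""
    return re.search(r'application/x-shockwave-flash|\.swf\b', html, re.I) is not None

def resources_blocked_for_bingbot(page_url, html):
    """Check (E): list CSS/JS resources that robots.txt hides from the crawler."""
    parsed = urlparse(page_url)
    robots = urllib.robotparser.RobotFileParser()
    robots.set_url(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
    robots.read()
    resources = re.findall(r'(?:src|href)=["\']([^"\']+\.(?:css|js))["\']', html, re.I)
    return [urljoin(page_url, r) for r in resources
            if not robots.can_fetch(BINGBOT_UA, urljoin(page_url, r))]

if __name__ == "__main__":
    url = "https://example.com/"  # replace with the page you want to check
    html = fetch(url)
    print("Viewport meta tag present: ", has_viewport_meta(html))
    print("Flash content detected:    ", uses_flash(html))
    print("Blocked CSS/JS for crawler:", resources_blocked_for_bingbot(url, html))
```

A real audit would go further, for example rendering the page at phone-sized widths to test tap targets and sideways scrolling, but simple checks like these catch the most common problems the checklist describes.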