It looks like Facebook is finally taking search more seriously. The company is reportedly working overtime on improving its own search feature, which leads us to wonder if it may even have something bigger up its sleeve. We’ve written several times in the past about the major opportunities Facebook has to make a big play in the search engine market and go head-to-head with Google, and this news does little to rule that possibility out.

Bloomberg BusinessWeek reports that something like 24 Facebook engineers are working on “an improved search engine,” and the effort is being led by former Googler Lars Rasmussen. Interestingly enough, while I was working on this article I happened to get an email from a Googler pointing the report out to me. They didn’t say as much, but Google no doubt wants more attention brought to the fact that other major web entities have opportunities to compete with them. The EU is expected to make a decision in an antitrust investigation into Google as soon as after Easter.

Should Facebook create a full-on search engine to compete with Google? Let us know what you think.

Bloomberg cites “two people familiar with the project” as providers of this info. Presumably they are from Facebook itself, as the report says they didn’t want to be named because Facebook is in its pre-IPO quiet period. “The goal, they say, is to help users better sift through the volume of content that members create on the site, such as status updates, and the articles, videos, and other information across the Web that people “like” using Facebook’s omnipresent thumbs-up button,” the report, co-authored by Douglas MacMillan and Brad Stone, says. Emphasis added.

That last part is particularly interesting, but more on that later.


If you use Facebook (and given that Facebook has over 800 million users, there’s a good chance you do), you probably know that its search feature isn’t the greatest or most efficient tool for finding information. Sure, there are plenty of options to refine your search. You can view results by: all results, people, pages, places, groups, apps, events, music, web results, posts by friends, public posts, or posts in groups. Even so, the results are often unhelpful – even the filtered ones.

Facebook Search Results

Given Facebook’s enormous user base and all of the content that is posted to the social network every day, a competent search engine is needed badly. Just think how much more useful Facebook would be if you could easily use it to find things. As a business, think about how much better Facebook could work for you if you could better optimize for its search feature, and it delivered your product or service’s page to people searching with relevant needs – or perhaps better yet, when their friends are talking about or checking in at your business.


Again, there are a reported two dozen engineers working on improving Facebook’s search feature. It sounds like they’re really putting a lot of time and effort into it now. If it turns out to be a major improvement and is that useful, competing with Google for searches seems inevitable at one level or another.

Consider the emphasis Google and other search engines are putting on social these days. Earlier this year, Google launched “Search Plus Your World,” delivering results much more based on your social circles – particularly your Google+ circles. One major flaw to this approach is that people just aren’t using Google+ the way they’re using Facebook, no matter how Google chooses to deem a user an active user.

For many people (about 800 million or so), a Facebook search engine would much more closely resemble “search, plus their world”.

There are quite a few interesting angles to consider, should a true Facebook search engine become a reality. Would it be available only to users? Facebook has a whole lot of public content. Being signed in would only serve to make the results more personalized – kind of like with Google today – the main difference being that personalization with Facebook data is much more likely to be relevant than personalization based on Google+ interaction. This is not a slight on Google+ as a service. It’s just a fact that Facebook has been around for far longer, and has way more active users who engage with their closest friends and family members on a daily basis, sharing tons of photos, videos, status updates and links to web content.

Would Facebook even bother to index the public web the way Google and its peers do? Right now, Facebook uses Bing to pad its search results with web results. Facebook could continue this indefinitely, or it could simply compete with Bing too, somewhere down the road. Facebook doesn’t need to index the web the way Google does, however, to put a dent in Google’s search market share. Even if it can only convince users to use its own revamped search feature for certain kinds of queries, those are queries users no longer need Google for.

I’ve long maintained that the biggest threat to Google’s search market share is likely not the threat of a single competitor, but the diversification of search in general. People are using more apps than ever (from more devices than ever), and just don’t have to rely on Google (or other traditional search engines) for access to content the way they used to. Take Twitter search, for example, which has become the go-to destination for finding quick info on events and topics in real time. When was the last time you turned to Google’s realtime search feature? It’s been a while, because it’s been MIA since Google’s partnership with Twitter expired last year. Sometimes a Twitter search is simply more relevant than a Google search for new information, despite Google’s increased efforts in freshness.

Google may even be setting itself up to push users to a Facebook search engine, should one arise. There has been a fair amount of discontent expressed regarding Google’s addition of Search Plus Your World. Much of this has no doubt been exaggerated by the media, but there is discontent there. What if Facebook had a marketing plan to go along with this hypothetical search engine? It shouldn’t be too hard for them to play that “search plus your actual world” angle up.

They’ve already done this to some extent. Not officially, exactly, but remember “Focus On The User” from Facebook Director of Product Blake Ross (with some help from engineers at Twitter and MySpace)?

And speaking of Twitter and MySpace, who’s to say they wouldn’t support a Facebook search engine, and lend access to their respective social data to make an even bigger, highly personalized social search engine? That could be incredibly powerful.

A conversation between two Business Insider writers would suggest that we won’t see Facebook as a “favorite web search engine any time soon,” but again, it doesn’t have to replace Google to make an impact.

About a year ago, we talked about a patent Facebook was awarded called “Visual tags for search results generated from social network information”. The patent’s description reads:

Search results, including sponsored links and algorithmic search results, are generated in response to a query, and are marked based on frequency of clicks on the search results by members of social network who are within a predetermined degree of separation from the member who submitted the query. The markers are visual tags and comprise either a text string or an image.
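A minimal sketch of the idea in that description, assuming invented data structures throughout (the graph, click log, and tag text are illustrative, not anything from Facebook or the patent): collect the members within a set degree of separation of the searcher, then tag any result they have clicked on often enough.

```python
# Hypothetical sketch of the patent's marking scheme: tag search results
# that members within `max_degree` hops of the searcher have clicked.
# All names and data structures here are illustrative.

from collections import deque

def within_degree(graph, member, max_degree):
    """BFS to collect members within max_degree hops of `member`."""
    seen, frontier = {member}, deque([(member, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_degree:
            continue
        for friend in graph.get(node, ()):
            if friend not in seen:
                seen.add(friend)
                frontier.append((friend, depth + 1))
    seen.discard(member)
    return seen

def tag_results(results, click_log, graph, member, max_degree=2, threshold=1):
    """Attach a visual tag to results clicked often enough by nearby members."""
    nearby = within_degree(graph, member, max_degree)
    tagged = []
    for url in results:
        clicks = sum(1 for m in click_log.get(url, ()) if m in nearby)
        tag = f"{clicks} friends clicked this" if clicks >= threshold else None
        tagged.append((url, tag))
    return tagged

# Tiny worked example: carol is two hops from alice, so her click counts.
graph = {"alice": ["bob"], "bob": ["alice", "carol"], "carol": ["bob"]}
click_log = {"example.com": ["carol", "dave"]}
tagged = tag_results(["example.com", "other.com"], click_log, graph, "alice")
```

The patent also covers image markers and sponsored links; this sketch only shows the frequency-of-clicks-within-degree-of-separation core.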

That’s something else to keep in mind.


There’s certainly plenty of opportunity to sell more Facebook ads (which are already getting pretty popular with businesses). It’s going to be much more about revenue at Facebook in the post-IPO world. Facebook is already superior to Google in terms of ad targeting by interest and demographic, as users can be targeted based on very specific things they have “liked”. Add search to the mix, and you also get users while they’re actively seeking something out – Google’s strong point. That’s the best of both worlds.

Facebook won’t have to please shareholders by showing that it can be a better search engine than Google, but if it can create a search engine, or even just an internal search feature, that people want to use, there is a huge opportunity to make plenty of revenue from that. It may also divert some portion of searches that would otherwise have gone to Google (or Yahoo, Bing, Ask or whatever) to Facebook instead, along with adding more cumulative time spent on Facebook.

Who knows? It may even set the stage for an AdSense-like ad network based on highly targeted Facebook ads. Again, revenue is going to be more important to Facebook than ever after the IPO.

Ex-Googlers Help Build ‘Boodigo,’ a New Search Engine Just for Porn

A couple of years ago, porn producer and director Colin Rowntree started feeling that mainstream search engines, like Google and Bing, were “trying to ghettoize the adult entertainment industry,” as he puts it. Google “blow job,” and you generally have to click through three pages of totally un-sexy content — you know, like a Wikipedia entry on the phrase’s etymology — before you actually arrive at the porn sites you were looking for in the first place. “It’s this corporate cowardice thing that’s been going on the last several years,” he told me over the phone. “Mainstream companies that have investors [or are] trying to go public [are] trying to distance themselves completely from adult entertainment.” Fair.

Mr. Rowntree cites tumblr as the most recent example; a few months ago, the site made it significantly harder for users to search for pornographic content — despite the fact that the site is chock-full of it.

Proving that necessity is the mother of invention, Mr. Rowntree, along with some fellow experienced content producers and webmasters, created Boodigo, a new search engine specifically geared toward people looking for porn. Besides its commitment to turning up un-pirated, non-virus-infected search results, Boodigo also promises users it won’t harvest any of their personal data to sell to advertisers. It lets people “find legitimate, legal, non-scary, non-damaging content for their adult entertainment needs,” Mr. Rowntree said.

Boodigo is a partnership between porn company Wasteland (NSFW) and west coast tech firm 0x7a69. Five of the west coast programmers, Mr. Rowntree said, are “refugees from Google” who were “not liking the way things were going.” Now, they’re putting their search engine programming experience toward Boodigo. Mr. Rowntree noted, however, that “they didn’t replicate Google’s logarithms even a little bit” — they wrote Boodigo’s code from scratch, line by line.
There are plenty of reasons to search for porn on Boodigo — besides the obvious convenience of not having to wade through 289 Cosmo articles in your quest for the perfect porno. The site, as I noted earlier, aims to populate its search results only with legal — in other words, not pirated — porn. As a long-time producer and director, Mr. Rowntree has experienced the hugely damaging effects of porn piracy first-hand. When people share and view adult entertainment without paying for it, “it’s horrible for the performers, for the studios, for everybody,” Mr. Rowntree said. Though he acknowledges that chasing illegal online porn distributors will “always be a cat and mouse game,” he’s certain that Boodigo has “done a very good job at identifying the illegal tube sites.” “They’re basically just blacklisted from ever getting into the search results,” he said.

Another plus is the site’s claim that it doesn’t track any of your personal data. As the About page explains:

“Boodigo does not use cookies or other user-tracking technologies to gather information about our users. We aren’t interested in building a ‘profile’ on our users; our core mission is simply to help you find what you’re looking for in a way that’s as efficient, effective and enjoyable as possible.”

In other words, using Boodigo means finding what you’re looking for without having to worry about what someone else might be finding out about you. Plus, using a separate site to search for your porn also saves you from that awkward moment when somebody borrows your computer to Google something, only to find that a search for “best food” autofills halfway through to “best food fetish vids.” “It’s not filling up [your] browser history with something [your] grandmother might find,” Mr. Rowntree said.

We asked Mr. Rowntree how Boodigo would earn money, if not by selling its user data to advertisers.
Eventually, he said, when the site starts bringing in more traffic, they’ll allow companies to bid for advertising space triggered by keywords entered by users — like what AdWords does.

Boodigo also comes with another added bonus, particularly for anyone upset by the aforementioned news about tumblr’s porn searching. Because of tumblr’s new restrictions, searching the site itself for “porn for women” turned up nothing. Womp womp. Not only does Boodigo let users search tumblr’s porn collections, but it even has a separate search tab exclusively for that purpose. When I searched for the same thing — “porn for women” — in Boodigo’s tumblr search, it turned up 56,128 results!

All that being said, be warned: as the search engine is brand new, it does still have some kinks. When I randomly searched for “cats,” for instance, I did find links to sexy sites like Alt Pussy Cats and Busty Cats, but I also found a link to a very un-sexy (though VERY cute) tumblr that’s basically just a series of photos of cats on boats. But if people are using the site to find porn, “they’re probably not going to be using it to search for cats,” Mr. Rowntree pointed out. Touché.

I asked Mr. Rowntree how he saw Boodigo expanding in the future. “We might end up experimenting with some kind of anonymous instant messaging service as an alternative to Skype or Google Chat,” he said. “The obvious name for that will be Boodicall.”

Baidu’s O2O Expansion And Its Search Business Prospects


  • Baidu shares are hovering near 52-week lows due to concerns over the impact on earnings from the company’s expansion into China’s O2O market.
  • The company’s search business is strong and its moat is likely to be maintained in the long run thanks to heavy investments in deep learning.
  • O2O could significantly boost the company’s core search business and offers an avenue to improve mobile monetization.
  • Thus, although O2O would impact earnings in the short to medium term, the company’s long term prospects from this expansion are likely to be positive.

Baidu’s (NASDAQ:BIDU) shares have fallen over 15% from a year ago and are trading in the lower end of the 52-week range, a result of investor concerns over future earnings as the company aggressively expands into the online-to-offline (O2O) market.


Source: Google Finance

This piece aims to look at Baidu’s O2O expansion and its long term impact on the company’s core business of search. Please note that this is only one factor in determining the attractiveness or non-attractiveness of Baidu as an investment and should not be weighed independently of other factors.

China today is the world’s largest internet market, with over 640 million Chinese connected to the internet, representing about 22% of the world’s internet users.

Yet China’s internet penetration stands at about 46%. Compare this with the United States, Japan, Germany and the United Kingdom – a few examples of nations with penetration rates exceeding 85%. This indicates promising growth potential for Baidu, which over the past decade has been China’s dominant search provider and consumers’ number one choice for online maps and search.

Although new entrants may eat into market share, Baidu’s domination in the search market is likely to continue over the long run. Baidu is China’s leading search provider with a market share exceeding 80% by revenue and was the leading website in China by traffic in 2014.

Source: China Internet Watch

In an effort to reinforce its leading position, Baidu has been aggressively investing in deep learning – a branch of artificial intelligence that aims to make computers “learn” for themselves. Baidu joins a handful of companies around the world actively pursuing this promising technology; Google, IBM and Microsoft are some notable examples. Last year, Baidu hired Stanford artificial intelligence professor Andrew Ng away from Google, where Ng co-founded the Google Brain project, a deep learning research project at Google.

The aims of Baidu’s deep learning pursuits are two-fold – enhance user search experience and lift search revenue.

User experience is improved by way of more accurate speech recognition, more accurate image search and improved search relevance, particularly for long queries. Baidu expects 50% of web searches to be conducted by voice rather than text within five years (up from about 10% currently), so speech recognition accuracy would provide a competitive edge for Baidu’s search engine in the future. Increased accuracy in search results translates into a better quality search engine, which in turn attracts more users.

Improved revenue is a product of improved advertising relevance. Deep learning holds the potential to more accurately predict and display the most relevant advertisements to the user, thereby improving the click-through rate. A higher click-through rate translates into better revenue for Baidu.

The investment in deep learning has produced results: a 25% reduction in the error rate for speech recognition, a 30% error rate reduction for optical character recognition, a 95% success rate for facial recognition and a “significant increase” in the company’s click-through rate.

Deep learning consists of an array of capital intensive technologies, the cost of which is beyond the financial capabilities of smaller internet firms. With Baidu harnessing the power of these technologies to improve its core business, the company is in effect widening its moat in the search business, enabling it to retain its position in the long run as leading search provider and increasing its ability to monetize search.

Search revenues have been projected to post strong though decelerating growth rates and revenues are expected to almost double by 2018.

Source: iResearch

With more Chinese internet users switching to mobile, mobile search revenue will be a key growth driver.

As of the end of 2014, about 85% of internet use in China was through mobile phones, and mobile internet users have been growing faster than China’s total internet users; in 2014, China’s mobile internet users grew by 57 million, compared to an increase of 31 million in China’s overall internet user population. Growth in 2015 has been strong as well; in the first six months of 2015, the share of China’s internet users going online via mobile increased to 89%, from 85% at the end of 2014.


Source: China Internet Network Information Center (CNNIC)

Baidu commands an 80% market share in China’s mobile search market and the company’s mobile revenue exceeded 50% of its total revenues in Q1 of 2015, up from 37% in fiscal 2014. However, since monetization rates on mobile searches are lower than PC searches, Baidu’s profitability could take a hit from consumers’ shift to mobile from traditional PCs; pay-per-click for mobile is about 60% of PC.

Thus, Baidu’s ability to monetize mobile search is key to future profitability. The company’s O2O expansion could tackle this problem. This strategy includes using Baidu’s core business of search as the foundation upon which O2O services would be built.

Similar to Google, Baidu has long dominated search in China, be it searching for products to buy, places to visit or research to conduct. However, the nature of search, especially mobile search, is changing, with vertical sites eating a significant chunk of the search business.

Source: China Internet Watch

For instance, in China, users who shop online via a mobile device are more likely to use an app instead of a search engine.

Instead of searching for a service on Baidu, they would search directly using a relevant app. Baidu’s O2O strategy aims to change this.

We need to connect people with services – mobile buying, movie tickets, these sort of high frequency services.

Robin Li, CEO Baidu.

Traditionally search connects people with information, but in the mobile age, search can function as a tool to connect people with services. The O2O services we operate will be a very valuable asset.

Jennifer Li, Chief Financial Officer Baidu, during Baidu’s earnings call.

Thus, if successfully executed, the O2O strategy holds the potential to expand the utility of Baidu’s core search business whereby search, particularly mobile search, is not limited to searching for information but also for searching and using services as well.

As an example, Baidu’s new Baidu Connect solution which allows merchants to create their own public mobile enterprise accounts, was launched in September last year and has already notched 760,000 paying merchants. Since Baidu Connect is integrated into other applications including Baidu Search, Baidu Maps and Nuomi, the solution allows merchants to reach customers through a number of platforms. If a consumer searches for a particular keyword on Baidu using a mobile phone, Connect account links would be displayed as top results, taking into consideration the user’s location and other information to display only the most relevant links. Through this, merchants are able to display advertisements to users near the merchant’s store, thereby boosting merchants’ conversion rates which translates into better monetization rates for Baidu as well.
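The location-aware keyword matching described above can be sketched roughly as follows. This is a toy illustration, not Baidu’s actual ranking: the merchant records, the flat-earth distance approximation, and the nearest-first ordering are all my own simplifications.

```python
# Toy sketch of location-aware keyword ranking: return merchants whose
# keywords match the query, nearest to the user first, within a radius.
# Not Baidu's algorithm -- purely illustrative.

import math

def distance_km(a, b):
    """Rough planar distance between two (lat, lon) points, in km."""
    dlat = (a[0] - b[0]) * 111.0
    dlon = (a[1] - b[1]) * 111.0 * math.cos(math.radians(a[0]))
    return math.hypot(dlat, dlon)

def rank_merchants(keyword, user_location, merchants, radius_km=5.0):
    """Keyword-matching merchants within the radius, nearest first."""
    matches = [
        m for m in merchants
        if keyword in m["keywords"]
        and distance_km(user_location, m["location"]) <= radius_km
    ]
    return sorted(matches, key=lambda m: distance_km(user_location, m["location"]))

# Made-up merchants near and far from a user in central Beijing.
merchants = [
    {"name": "Noodle House", "keywords": {"noodles"}, "location": (39.91, 116.41)},
    {"name": "Far Noodles", "keywords": {"noodles"}, "location": (40.50, 117.00)},
    {"name": "Tea Shop", "keywords": {"tea"}, "location": (39.90, 116.40)},
]
nearby = rank_merchants("noodles", (39.90, 116.40), merchants)
```

A real system would of course blend distance with relevance, bids and other signals; the point is only that folding location into keyword results is what lets a merchant reach users near the store.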

With the potential to lure customers to searching and using services through Baidu’s search engine instead of directly through mobile apps, Baidu’s O2O strategy is opening an avenue to improve mobile search monetization. The strategy also helps to widen the company’s customer base; so far 99% of offline clients gained through O2O are new customers.

SoundExchange Launches PLAYS Search Engine: Who’s Claiming Your Recordings?

It sounds so simple. You are a record label. You own some recordings, meaning, you have a contract that proves that you own the master rights to some recordings. You send — or have a representative like The Orchard send — your metadata to SoundExchange (SX) to register your sound recordings. You get paid from SX (or from SX via The Orchard) when your recordings get played on Pandora, SiriusXM, iHeartRadio, etc.

What could go wrong with this process? A lot, unfortunately.

  1. SX may be getting garbage reporting (no ISRCs, no UPCs) from the services that report to them, thus making it difficult for them to match reported tracks to their database, which may result in a mis-allocation of the funds received.
  2. SX may receive good reporting from the streaming services, but may not know how to match those royalties to the correct rightsholder. This is because SX’s database is full of multiple instances of THE SAME SONG. That’s right. Was your song on a compilation? Do many versions of the recording exist? All of those recordings were likely registered in SX’s database, increasing the odds that SX will match the ACTUAL recording played to the wrong rightsholder.
  3. Another company has laid claim to your recording.
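Failure modes #1 and #2 above come down to matching logic. Here is a minimal, hypothetical sketch (not SoundExchange’s actual system; the field names and fallback rule are invented) showing why clean ISRCs matter: an exact identifier match is unambiguous, while a title/artist fallback can hand a play to the wrong one of several duplicate registrations.

```python
# Illustrative sketch of play-to-registration matching: ISRC first,
# then a normalized title/artist fallback that breaks when the same
# song is registered multiple times (compilations, reissues).

def match_play(play, registrations):
    """Return the registration matched to a reported play, or None."""
    # Best case: the service reported an ISRC and it matches exactly.
    if play.get("isrc"):
        for reg in registrations:
            if reg["isrc"] == play["isrc"]:
                return reg
    # Garbage reporting: fall back to normalized title + artist.
    key = (play["title"].strip().lower(), play["artist"].strip().lower())
    candidates = [
        reg for reg in registrations
        if (reg["title"].lower(), reg["artist"].lower()) == key
    ]
    # With duplicates, the first match wins -- and the royalties may
    # flow to the wrong rightsholder.
    return candidates[0] if candidates else None

# Two registrations of THE SAME SONG, owned by different parties.
registrations = [
    {"isrc": "USAAA0000001", "title": "Song", "artist": "Band", "owner": "Label A"},
    {"isrc": "USAAA0000002", "title": "Song", "artist": "Band", "owner": "Compilation Label"},
]
clean_play = {"isrc": "USAAA0000002", "title": "Song", "artist": "Band"}
dirty_play = {"isrc": None, "title": "song ", "artist": "BAND"}
```

With the ISRC present, the play resolves to the compilation’s registration; without it, the fallback silently picks Label A, whether or not that is who the service actually played.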

Registering recordings for thousands of clients at performance and neighboring rights societies around the world, I see #3 occurring most often, and it is the primary reason royalties fall into the wrong hands. But at SoundExchange, there is a tool that everyone can use that provides a window into who is claiming your recordings: the PLAYS Search Engine.

I encourage anyone and everyone who owns master recordings to check it out. If you do not see your recording in this database, it means SoundExchange has not received any plays associated with your track (sorry). If you see your recording but the “Rights Owner” field is empty, this means that SX has received plays and no one has yet claimed the track. And finally, if you see your recording and another party is listed as the rights owner, you are missing out on performance income that is rightfully yours.

We all know that in the music industry, money often flows into the wrong hands, and SoundExchange is not immune to the data quality issues that are prevalent across this industry. With the PLAYS Search Engine, at least, they have provided a small window for the public to actually see where some of that money is flowing, and a means to correct it if it is wrong. So take a peek – you may be surprised by what you find!


Streaming search engine JustWatch trawls Netflix, iPlayer and more

It just got easier to find your favorite movies and TV shows on-demand, as a search engine dedicated to the major streaming services has launched in the UK.

JustWatch lets users trawl Netflix, BBC iPlayer, Amazon Prime Instant Video, Now TV and other platforms in one fell swoop.

Search Netflix, iPlayer and more with JustWatch

© JustWatch


A timeline helps you keep track of new content on each service, and a range of filters lets users tailor searches based on their favorite providers and genres.

JustWatch also flags up price drops on various platforms to help users get the best deals when purchasing or renting movie and TV content.

It’s available on the web, as well as mobile devices via an official app for Android and iOS. You can even use your smartphone as a remote for navigating the service.

JustWatch arrived in the UK this week off the back of launches in the US, Germany, Brazil, Australia and New Zealand.


The hacker Search Engine “Shodan”

Launched in 2009, Shodan spies on computers connected to the internet rather than working as just a simple search engine.

Shodan’s creator John Matherly named it after the villainous computer in the video game System Shock.

It is also called the hacker’s search engine because it aims to index all the devices connected to the Internet; it took no time to become a play zone for hackers and experimenters.

Shodan works by collecting and indexing service banners from devices linked to the Internet across the world. The indexing is done on attributes such as country, OS and brand.
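That attribute-based indexing can be pictured with a toy example. The banner records below are made up (real Shodan data comes from its API and carries many more fields); the point is just the grouping step.

```python
# Toy illustration of Shodan-style indexing: group collected banner
# records by a chosen attribute (country, product, OS, ...).
# The records are invented for the example.

from collections import defaultdict

def build_index(banners, attribute):
    """Group banner records' IPs by one attribute."""
    index = defaultdict(list)
    for banner in banners:
        index[banner.get(attribute, "unknown")].append(banner["ip"])
    return dict(index)

banners = [
    {"ip": "203.0.113.5", "country": "DE", "product": "webcam"},
    {"ip": "198.51.100.9", "country": "DE", "product": "router"},
    {"ip": "192.0.2.44", "country": "US", "product": "webcam"},
]
by_country = build_index(banners, "country")
by_product = build_index(banners, "product")
```

Indexing the same banners along several attributes at once is what lets queries like “webcams in Germany” be answered by a simple lookup instead of a rescan.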

Shodan’s scanning power can be gauged from the fact that it can detect traffic lights, security cameras, control systems for gas stations, power grids, and even nuclear power plants.

Most of these public services take few measures for online security, and once exposed to hackers or terrorist organizations, the results could be disastrous.

There are a number of devices out there that still run on their default passwords, or no passwords at all. Shodan crawls through the Internet for such accessible devices, and you are shown 50 of them if you have an account on Shodan.

If you give the website a reason for checking these devices and pay its fees, you can get information on all the matching devices.

Study shows that internet search engines have the power to swing elections.

As a society, we are happily ensconced in the internet era. And we’re sure that you, oh wonderful blog readers, are among the first to use the internet to find information about candidates come election time. And by and large, we assume the internet search engines we use to find that information are unbiased. But what if they aren’t? Could the order of search results skew our perceptions of possible candidates? Well, this paper explores that very scenario. The result? Let’s just say that we’re happy that Google’s motto is “don’t be evil.”

The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections.

“Internet search rankings have a significant impact on consumer choices, mainly because users trust and choose higher-ranked results more than lower-ranked results. Given the apparent power of search rankings, we asked whether they could be manipulated to alter the preferences of undecided voters in democratic elections. Here we report the results of five relevant double-blind, randomized controlled experiments, using a total of 4,556 undecided voters representing diverse demographic characteristics of the voting populations of the United States and India. The fifth experiment is especially notable in that it was conducted with eligible voters throughout India in the midst of India’s 2014 Lok Sabha elections just before the final votes were cast. The results of these experiments demonstrate that (i) biased search rankings can shift the voting preferences of undecided voters by 20% or more, (ii) the shift can be much higher in some demographic groups, and (iii) search ranking bias can be masked so that people show no awareness of the manipulation. We call this type of influence, which might be applicable to a variety of attitudes and beliefs, the search engine manipulation effect. Given that many elections are won by small margins, our results suggest that a search engine company has the power to influence the results of a substantial number of elections with impunity. The impact of such manipulations would be especially large in countries dominated by a single search engine company.”
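To see why the paper’s “small margins” point bites, a back-of-envelope calculation helps. The numbers below are invented for illustration, not taken from the study:

```python
# Back-of-envelope arithmetic (illustrative numbers, not from the paper):
# how a shift among undecided voters translates into net votes moved.

def net_swing(turnout, undecided_share, shift):
    """Net votes moved when `shift` of the undecided voters change preference."""
    return turnout * undecided_share * shift

# 1,000,000 voters, 10% undecided, 20% of them shifted by biased rankings.
votes_moved = net_swing(turnout=1_000_000, undecided_share=0.10, shift=0.20)
```

Under these assumptions that is 20,000 votes, a 2-point swing in a million-voter race: comfortably larger than the margin in many real elections.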

Graphiq Search: FindTheBest Becomes Knowledge Graph Engine

The world’s most extensive “knowledge graph” may not be at Google. Vertical search site FindTheBest (FTB) has rebranded and relaunched as Graphiq, a data visualization or knowledge graph search engine.

Using a huge volume of data sources, automation and human editorial oversight, the company says that it has created “the world’s deepest and most interconnected Knowledge Graph, featuring 1,000 collections, 1 billion entities, 120 billion attributes, and 25 billion curated relationships.”

Founder Kevin O’Connor told me that FTB’s experience creating roughly 18 distinct search verticals loaded with structured data was the foundation for the new site. There’s considerable sophistication behind the scenes, enabling the new site to dynamically generate 10 billion data visualizations.

Here are a few examples:


While it’s difficult to determine, Graphiq may offer the widest selection of structured data (and associated visualizations) anywhere online. The data and graphics range from international GDP comparisons to healthcare stats to the historical popularity of US baby names and well beyond.

FTB’s original vertical search sites are not being promoted, but they can still be found in general search results. For example, people will still be able to use and search for houses on the company’s real estate vertical (FindTheHome), look up credit card rates on its financial site (Credio) or do research on colleges and universities (StartClass). Indeed, many of the Graphiq visualizations click through to these underlying vertical comparison sites.

While the FTB vertical engines are consumer-facing, Graphiq is positioned very differently and is directed mainly toward journalists, researchers, publishers and enterprises. However, this is merely one business model expression of the underlying data model.

Publishers can use Graphiq’s charts as content, and journalists can sign up for alerts and research or embed graphics in their stories. In this sense, Graphiq is not that far removed from Nate Silver’s FiveThirtyEight.


Graphiq says it has done enterprise integrations with AOL/The Huffington Post, MSN, Hearst and several other large publishers. There are also WordPress plugins and custom integrations.

In these enterprise integrations, Graphiq will generate or recommend data and charts for stories based on an automated analysis of content. For example, the system might suggest a relevant visualization for a story written on Obamacare enrollment. Beyond this, users can simply search or browse the data in a more conventional way.

FindTheBest began as an effort to improve upon Google and provide structured comparison information and “answers not links.” It had terrific data but struggled to generate consumer awareness and create a brand. This shift (or expansion, as the company explains it) offers a more comprehensive enterprise-facing tool that offers immediate and obvious value. But there are many other interesting ways the underlying technology and data could be used.

‘Ethical search engine’ Storm to generate funds for charities

A search engine launched last month with the aim of delivering better, more relevant search results and letting individuals raise money for their favourite charities as they shop online. Storm’s founders describe it as an “ethical search engine”.

Storm looks similar to other search engines, but its results include a ‘Give’ icon displayed alongside listings for participating retailers. Consumers know that when they make a purchase from one of these retailers, Storm will earn a commission, which it shares with the consumer’s chosen charity. The consumer, of course, pays nothing extra for the purchase.
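The revenue split is simple arithmetic; here is a sketch with hypothetical numbers (Storm’s actual commission rate and charity share are not public):

```python
# Hypothetical illustration of an affiliate-commission charity split.
# The 5% commission rate and 50/50 split below are invented for the example.

def charity_donation(purchase, commission_rate=0.05, charity_share=0.5):
    """Amount passed to the shopper's chosen charity, rounded to pence.

    The shopper pays nothing extra; the donation comes out of the
    retailer's affiliate commission."""
    return round(purchase * commission_rate * charity_share, 2)

print(charity_donation(100.00))  # → 2.5
```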

Waitrose, Virgin, Sports Direct, Currys, Boots and B&Q are among the “thousands of well-known retailers” who are participating.


Storm search engine results

First charity partner

Storm’s launch charity partner is WellChild, the charity for sick children.

Colin Dyer, CEO of WellChild, said:

“WellChild is really excited to be working with Storm to give our supporters a quick and easy way to raise much needed money as they shop online – at no extra cost to them. This will make a real difference to thousands of seriously ill children and young people in the UK, helping to ensure they get the best possible care and support wherever they are and whenever they need it.”

Storm will be partnering with other charities and organisations, including Premier League football clubs. These partners will have the option to ‘white label’ a version of the Waterfox browser, featuring their own organisation’s branding and with Storm as the default search engine.

Waterfox is a very fast web browser, designed by Alex Kontos, who is Head of Browser Development at Storm.

Scale of the opportunity


According to the IMRG Capgemini e-Retail Sales Index (January 2015), almost £1 in every £4 in the UK is now spent online, totalling £116 billion in 2014. Storm’s founders claim that the site could generate “up to £25” from each active user per year for charitable causes.


Other charitable search engines

There are other established search engines that generate income for charities in the UK and internationally, so what makes Storm different?

One selling point is that “the individual user is completely anonymous while using Storm, as the search engine does not harvest data to package up and sell on to third parties”.


Kevin Taylor, CEO of Storm


Kevin Taylor, CEO of Storm, explained:

“Consumers have grown suspicious of how the traditional search providers use their personal information and are increasingly aware of the burgeoning profits concentrated in just a few companies. By refusing to harvest customer data, and sharing our revenues with charity partners, we’re building an ethical search engine that offers a real alternative for consumers, whilst also generating millions of pounds for UK charities.”

In addition, the search service will be available in 35 languages later this year.

Storm’s team

Storm has been developed with over £2 million of funding from investors and aims to have 10 million regular users within two years.

The startup is backed by investment from several angel investors, including serial entrepreneur Andrew Crossland, of Crossland Technology Investments. The company is headed up by CEO Kevin Taylor, formerly of the security company Symantec. His data protection and information privacy expertise is a core element of Storm.

Adam Green is CTO and the chief designer of the new platform. He helped develop the world’s largest online car hire company, RentalCars, which was sold to Priceline Group in 2013 for $135 million. Green, a former Accenture director, helped Airtours, the tour operator that became MyTravel, develop its first online trading platforms at a time when it was the largest online travel booking company.

Storm is available as an Android app with other mobile platforms in development.



Google Search Results Could Steal the Presidency

Wired has published the results of a study by two scientists at the American Institute for Behavioral Research and Technology showing that the algorithm Google uses in its search engine could accidentally determine the outcome of a close presidential race.

Specifically, the ranking of negative and positive stories about a particular candidate vastly influences the decision on who to vote for by individual voters.
IMAGINE AN ELECTION—A close one. You’re undecided. So you type the name of one of the candidates into your search engine of choice. (Actually, let’s not be coy here. In most of the world, one search engine dominates; in Europe and North America, it’s Google.) And Google coughs up, in fractions of a second, articles and facts about that candidate. Great! Now you are an informed voter, right? But a study published this week says that the order of those results, the ranking of positive or negative stories on the screen, can have an enormous influence on the way you vote. And if the election is close enough, the effect could be profound enough to change the outcome.

In other words: Google’s ranking algorithm for search results could accidentally steal the presidency. “We estimate, based on win margins in national elections around the world,” says Robert Epstein, a psychologist at the American Institute for Behavioral Research and Technology and one of the study’s authors, “that Google could determine the outcome of upwards of 25 percent of all national elections.”

Epstein’s paper combines a few years’ worth of experiments in which Epstein and his colleague Ronald Robertson gave people access to information about the race for prime minister in Australia in 2010, two years before the experiments began, and then let the mock-voters learn about the candidates via a simulated search engine that displayed real articles.

One group saw positive articles about one candidate first; the other saw positive articles about the other candidate. (A control group saw a random assortment.) The result: whichever candidate people saw the positive results for, they were more likely to vote for—by more than 48 percent. The team calls that number the “vote manipulation power,” or VMP. The effect held—strengthened, even—when the researchers swapped a single negative story into the number-three and number-four spots. Apparently it made the results seem even more neutral and therefore more trustworthy.

Google’s algorithm is proprietary, so forget about anyone seeing it to determine the cause of this effect. But it would be interesting to see if one party or the other was usually or always negatively impacted by the ranking of search results.

The ranking of positive and negative stories is a by-product of the algorithm — not the intent of Google’s managers. But could Google — or a campaign — actually game the system to manipulate a desired result?

What they call the “search engine manipulation effect,” though, works on undecided voters, swing voters. It’s a method of persuasion.

Again, though, it doesn’t require a conspiracy. It’s possible that, as Epstein says, “if executives at Google had decided to study the things we’re studying, they could easily have been flipping elections to their liking with no one having any idea.” But simultaneously more likely and more science-fiction-y is the possibility that this—oh, let’s call it “googlemandering,” why don’t we?—is happening without any human intervention at all. “These numbers are so large that Google executives are irrelevant to the issue,” Epstein says. “If Google’s search algorithm, just through what they call ‘organic processes,’ ends up favoring one candidate over another, that’s enough. In a country like India, that could send millions of votes to one candidate.”

Conservatives have been claiming for years that Google has an anti-conservative bias, even though in recent years Google has been contributing to conservative organizations like the Heritage Foundation and the Federalist Society. Of course, that doesn’t settle much on its own, but it raises questions as to whether a bias against conservatives and conservative issues translates into a deliberate effort to create an algorithm that would penalize the right when it comes to elections.

I don’t even know if that’s possible. People use Google to search for everything from baby clothes to candidates’ positions on issues. Could they actually write a program that would always rank negative stories about conservative candidates first?

There’s no doubt Google, the company, has a liberal bias. But whether they could — or would — consciously use their search engine to advance their agenda can’t be proved and would seem to be impossible.

Internet search engines may be influencing elections

“What we’re talking about here is a means of mind control on a massive scale that there is no precedent for in human history.” That may sound hyperbolic, but Robert Epstein says it’s not an exaggeration. Epstein, a research psychologist at the American Institute for Behavioral Research and Technology in Vista, California, has found that the higher a politician ranks on a page of Internet search results, the more likely you are to vote for them.

“I have a lot of faith in the methods they’ve used, and I think it’s a very rigorously conducted study,” says Nicholas Diakopoulos, a computer scientist at the University of Maryland, College Park, who was not involved in the research. “I don’t think that they’ve overstated their claims.”

In their first experiment, Epstein and colleagues recruited three groups of 102 volunteers in San Diego, California, who were generally representative of the U.S. voting population in terms of age, race, political affiliation, and other traits. The researchers wanted to know if they could influence who the Californians would have voted for in the 2010 election … for prime minister of Australia.

So they built a fake search engine called Kadoodle that returned a list of 30 websites for the finalist candidates, 15 for Tony Abbott and 15 for Julia Gillard. Most of the Californians knew little about either candidate before the test began, so the experiment was their only real exposure to Australian politics. What they didn’t know was that the search engine had been rigged to display the results in an order biased toward one candidate or the other. For example, in the most extreme scenario, a subject would see 15 webpages with information about Gillard’s platform and objectives followed by 15 similar results for Abbott.

As predicted, subjects spent far more time reading Web pages near the top of the list. But what surprised researchers was the difference those rankings made: Biased search results increased the number of undecided voters choosing the favored candidate by 48% compared with a control group that saw an equal mix of both candidates throughout the list. Very few subjects noticed they were being manipulated, but those who did were actually more likely to vote in line with the biased results. “We expect the search engine to be making wise choices,” Epstein says. “What they’re saying is, ‘Well yes, I see the bias and that’s telling me … the search engine is doing its job.’”

In a second experiment, the scientists repeated the first test on 2100 participants recruited online through Amazon’s labor crowdsourcing site Mechanical Turk. The subjects were also chosen to be representative of the U.S. voting population. The large sample size—and additional details provided by users—allowed the researchers to pinpoint which demographics were most vulnerable to search engine manipulation: Divorcees, Republicans, and subjects who reported low familiarity with the candidates were among the easiest groups to influence, whereas participants who were better informed, married, or reported an annual household income between $40,000 and $50,000 were harder to sway. Moderate Republicans were the most susceptible of any group: The manipulated search results increased the number of undecided voters who said they would choose the favored candidate by 80%.

“In a two-person race, a candidate can only count on getting half of the uncommitted votes, which is worthless. With the help of biased search rankings, a candidate might be able to get 90% of the uncommitted votes [in select demographics],” Epstein explains.

In a third experiment, the team tested its hypothesis in a real, ongoing election: the 2014 general election in India. After recruiting a sample of 2150 undecided Indian voters, the researchers repeated the original experiment, replacing the Australian candidates with the three Indian politicians who were actually running at the time. The results of the real world trial were slightly less dramatic—an outcome that researchers attribute to voters’ higher familiarity with the candidates. But merely changing which candidate appeared higher in the results still increased the number of undecided Indian voters who would vote for that candidate by 12% or more compared with controls. And once again, awareness of the manipulation enhanced the effect.

A few percentage points here and there may seem meager, but the authors point out that elections are often won by margins smaller than 1%. If 80% of eligible voters have Internet access and 10% of them are undecided, the search engine effect could convince an additional 25% of those undecided to vote for a target candidate, the team reports online this week in the Proceedings of the National Academy of Sciences. That type of swing would determine the election outcome, as long as the expected win margin was 2% or less. “This is a huge effect,” Epstein says. “It’s so big that it’s quite dangerous.”
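That back-of-the-envelope arithmetic is easy to reproduce:

```python
# Reproducing the paper's back-of-the-envelope estimate from the figures above.
eligible_online = 0.80  # share of eligible voters with Internet access
undecided = 0.10        # share of those voters who are undecided
swayed = 0.25           # share of the undecided the effect could convince

# Extra votes for the target candidate, as a share of ALL eligible voters:
extra_share = eligible_online * undecided * swayed
print(f"{extra_share:.0%}")  # → 2%
```

A 2% shift of the total electorate toward one candidate is exactly why the authors flag races with an expected win margin of 2% or less as vulnerable.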

But perhaps the most concerning aspect of the findings is that a search engine doesn’t even have to intentionally manipulate the order of results for this effect to manifest. Organic search algorithms already in place naturally put one candidate’s name higher on the list than others. This is based on factors like “relevance” and “credibility” (terms that are closely guarded by developers at Google and other major search engines). So the public is already being influenced by the search engine manipulation effect, Epstein says. “Without any intervention by anyone working at Google, it means that Google’s algorithm has been determining the outcome of close elections around the world.”

Presumably Google isn’t intentionally tweaking its algorithms to favor certain presidential candidates, but Epstein says it would be extremely difficult to tell if it were. He also points out that the Internet giant stands to benefit more from certain election outcomes than others.

And according to Epstein, Google is very aware both of the power it wields, as well as the research his team is doing: When the team recruited volunteers from the Internet in the second experiment, two of the IP addresses came from Google’s head office, he says.

“It’s easy to point the finger at the algorithm because it’s this supposedly inert thing, but there are a lot of people behind the algorithm,” Diakopoulos says. “I think that it does pose a threat to the legitimacy of the democracy that we have. We desperately need to have a public conversation about the role of these systems in the democratic processes.”

GeniusSearch: a feature-rich search engine

GeniusSearch was started in November 2014 because we noticed the lack of features on traditional search engines. Every search engine out there offers the same old 10 links on the page with no extra features for those links. GeniusSearch prides itself on offering far more than the competition. It has features like quick look, email link, comments on links, link sharing and the ability to pin images to Pinterest in image search. More features are coming: a review system is currently in development so that users can leave reviews for each website.


Quicklook – Quicklook offers users a way to take a sneak peek at websites without leaving the search engine. You can open multiple websites at once to compare them, whether you are comparing prices or just looking for the best pictures or videos. You can use the quick look feature for anything.

Email link – Under each web result there is a way to email the link to anyone you want. If you want to share results with friends you can quickly email the link to them with our email link button.

Comments – Under each web result there is a way to leave comments on the website (domain or url). You can easily leave any comment you want users to see on the search engine. This is a great way to discuss news articles or leave feedback about the websites you visit.

Share – Under each result is a share button so that you can share the link on any site you choose. There are currently 300 websites that integrate with the share button, including Facebook, Twitter, Tumblr and Google+. Just hit the button and you’ll get to share the link with whoever you want.

Pin it – A Pin It button is available for all image searches, so if you would like to pin the images you are looking at, it’s only one click away. You can pin any of the images you find onto Pinterest.

In development – Review system – A review system is in development. When it’s done, users will be able to leave comments about a website along with a rating out of 5 stars, so that other users can see them directly in the search results when searching for something.

The Archie Search Engine – The World’s First Search!

Over the years, we’ve loved our popular search engines — AOL, Yahoo in 1994, and Google from 1998 onward. But while all of these search engines have existed for quite a while now, none of them was the first. That distinction belongs to a search engine that has made its place in history under the name Archie.

Written over two decades ago and with no updates since then, Archie provided a very different search experience than we’re used to today. So how is it different, and could it still be useful today? I’ll take you on a tour of the Archie search engine and give you a perspective on how things have changed over the past 23 years.

About Archie

Archie, whose name is “archive” without the “v” (in keeping with terse Unix naming conventions), was written in 1990 by Alan Emtage, who was studying at McGill University in Montreal at the time. While the World Wide Web didn’t yet exist, there was a much smaller network in place that hosted a number of different files. Archie was a simple search engine that kept an index of the file lists of all the public FTP servers it could find. This way, users could locate publicly available files and download them. It provided a much better way to find files, as previously people could only learn about files by word of mouth.
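In spirit, Archie’s index was just a mapping from FTP servers to their public file listings, searched by filename. A minimal sketch (illustrative only, not Emtage’s actual implementation; server names and paths are invented):

```python
# Minimal sketch of an Archie-style index: map each FTP server to its
# public file listing, then search filenames by substring.

index = {
    "ftp.example.edu": ["pub/linux/kernel-0.99.tar.gz", "pub/docs/readme.txt"],
    "ftp.example.org": ["mirrors/gnu/emacs-18.59.tar.Z"],
}

def search(term):
    """Return (server, path) pairs whose path contains the term."""
    term = term.lower()
    return [(server, path)
            for server, paths in index.items()
            for path in paths
            if term in path.lower()]

print(search("linux"))  # → [('ftp.example.edu', 'pub/linux/kernel-0.99.tar.gz')]
```

The real Archie periodically re-crawled each server’s listings to refresh this index; the lookup itself was essentially this simple.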

First Impressions

One of the places that still hosts an Archie search engine is the University of Warsaw. With this page, you can search for public files available via FTP as well as regular Polish web pages. As Linux files are commonly available via FTP servers, the first thing I searched for was “linux”, which returned various Linux-related files, 100 results at a time (the default setting). There is a “More Results” button you can click to return the next 100 results. However, after looking at the list, I quickly discovered that most of the files found dated to 2001, so I assume this particular Archie search engine hasn’t fully updated its index since then.

Customizing Your Search

Despite the Archie search engine being very primitive, the search page still offers a number of different features to customize your search experience. For example, besides being able to choose between “Anonymous FTP” and “Polish Web Index”, you can also choose whether your search entry should be treated as:

  • a sub string (as long as a part of the filename includes what you searched)
  • an exact search (anything that doesn’t match the query exactly is rejected), and
  • a regular expression

You can also choose whether the search is case-sensitive or case-insensitive. Another available option is the ability to search for strings rather than paths: if this option is enabled, Archie returns matching filenames but not the actual locations where the files were found, so you cannot download them. I’m not entirely sure why this feature would be useful, but I’m sure it was added for a reason. There are even three options for how the search results should be output: keywords only, excerpts only, and links only.
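These match modes are easy to mirror in code. Here is a purely illustrative sketch of how substring, exact, and regular-expression matching differ for a single filename:

```python
import re

def matches(filename, query, mode="substring", case_sensitive=False):
    """Mimic Archie's three match modes for a single filename."""
    if not case_sensitive:
        filename, query = filename.lower(), query.lower()
    if mode == "substring":
        return query in filename
    if mode == "exact":
        return filename == query
    if mode == "regex":
        return re.search(query, filename) is not None
    raise ValueError(f"unknown mode: {mode}")

print(matches("linux-2.0.tar.gz", "linux"))                    # → True  (substring)
print(matches("linux-2.0.tar.gz", "linux", mode="exact"))      # → False (not an exact match)
print(matches("linux-2.0.tar.gz", r"linux-\d", mode="regex"))  # → True  (regex hit)
```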

Advanced Options

There are a number of optional search parameters that can help you be more specific with your needs, as there are many files on the Internet. These optional parameters include the ability to:

  • change how Archie treats spaces in your search query from “OR” to “AND”
  • limit the search results to match a directory path
  • exclude search results that match a directory path you don’t want
  • limit the search to certain domains such as .com, .edu, .org, etc.
  • set the maximum search results at one time
  • set maximum hits per string matched (if using a non-exact search)
  • set maximum characters of string matched (if using a non-exact search)


Overall, I feel that the Archie search engine, despite being primitive by today’s standards, was still a functional way to search. I am surprised, however, at how specific you can still be with it, which can help a lot when looking for a particular file. I still much prefer today’s search tools, because it only takes a few keywords to find what I’m looking for without having to fill in a bunch of optional search parameters. Those improvements can be attributed to modern engines’ ability to handle natural language queries and to Google’s algorithmic advances. It’s interesting to see how search engines have progressed from Archie to Google, and it makes me excited to see how search advances even more in the future!

Apple is hiring for its own search engine: Apple Search

While Google started out as a search engine before expanding into the tech giant it is today, it looks like its differently minded rival megacorporation Apple might take the opposite approach. A recent job listing suggests Apple is now developing a search engine of its own: Apple Search.

“Apple seeks a technical, driven and creative program manager to manage backend operations projects for a search platform supporting hundreds of millions of users,” reads the job description for an Apple Search Engineering Project Manager. “Play a part in revolutionizing how people use their computers and mobile devices.” As far as confirmations of new Apple services go, it doesn’t get more straightforward than that.

So the question isn’t “Is Apple making a search engine?” but rather: what will Apple Search be? Presumably it will be the default search engine for future Apple products. After all, that’s what Google and Microsoft do, and compared to them, Apple is even more preoccupied with keeping users in a closed ecosystem. Google still makes most of its money through search, and if it was willing to pay $1 billion to remain the default iOS search engine last year, that must be a pretty valuable space to be in. Plus, with Google’s search contract on Safari browsers expiring this year, Apple’s timing hardly seems coincidental.

Services like Siri and Spotlight search on OS X Yosemite are already getting users accustomed to Apple-curated searches while also gathering the precious data that gives search engines their value. Meanwhile, recent hires like search expert William Stasior formerly of Amazon and AltaVista, along with discoveries like web-crawling bots on Apple servers, show that lots of behind-the-scenes work has already taken place.

But until a more official statement from Apple arrives, all we can do is guess what Apple Search will be and when it will be released. For all we know, it might not even resemble a traditional search engine, given how disruptive Apple likes to be. But if you really can’t wait to find out, and you’re qualified, why not apply for the job?

Of course, we all remember the last time Apple tried to move in on a space Google had pretty much perfected. So let’s all just hope that whatever Apple Search is, it’s better than the Apple Maps disaster. It should probably be better than Bing too, but that’s not hard.

Google Could Be Threatened by Losing Search Deals

Google’s Excellent 2Q15 Earnings Have Re-Rated Its Stock


Google remains the dominant player in the search ad market

The search advertising business remains Google’s (GOOG) most valuable business. Google dominates the US desktop search ad market with a 64% share, according to a June 2015 report from comScore. Microsoft (MSFT) is a distant second with a 20% share, and Yahoo (YHOO) ranks third with 13%.

Google has lost too many search deals to competitors lately

Some risks have increased that could disrupt Google’s dominance in this market. The company has lost too many search deals to competitors. Recently, Microsoft announced that it will hand over its display advertising operations to AOL (AOL). In exchange, AOL will replace Google’s search engine with Microsoft’s Bing for the next ten years.

This isn’t the first time this kind of thing has happened to Google. In November last year, Mozilla announced that it would replace Google with Yahoo as the default search engine on its Firefox browser in the United States. This deal did affect Google, as we covered in Mozilla’s Deal with Yahoo Impacted Google. Apple (AAPL) also replaced Google with Microsoft’s Bing on Siri with the launch of iOS 7 in 2013.

Although Google doesn’t seem to have been affected by these incidents in the short term, the search giant could definitely be affected in the long term once users start using the default search engines more often. For diversified exposure to Google, you can consider investing in the PowerShares QQQ Trust, Series 1 (QQQ). QQQ invests 3.5% of its holdings in Google.

Hulbee Launches Only Privacy-First Search Engine Secure From NSA, EU And Data Miners

Delivers the power of semantic search to consumers in the U.S., while offering a safer alternative that doesn’t collect user data

EGNACH, Switzerland, Aug. 5, 2015 /PRNewswire/ — Hulbee, a Swiss technology company with more than 15 years’ experience delivering enterprise-grade search and data analytics to leading European corporations, today launched the most secure, privacy-first search engine. Leveraging semantic search algorithms, Hulbee provides users with a means to find what they’re looking for, even when they’re unsure of the precise phrase or term to input.

Hulbee maintains unprecedented levels of security not offered by any other privacy-based search company because it has its own data center in Switzerland, so all of its information is safely stored away from the National Security Agency in the U.S. and the European Union. The potential for security breaches during the transmission of data via the cloud or from one site to another is also eliminated by Hulbee’s infrastructure platform.

“As we continue to see examples of data privacy intrusion on both a personal and corporate level, we realized our greatest strengths in the enterprise search arena could be highly valued by individuals and families,” said Hulbee CEO Andreas Wiebe. “In response to this demand, we developed Hulbee, a search engine that now gives users in the U.S. and abroad not only a safer alternative, but one built on 17 different algorithms developed during our 15 years in the enterprise sector, for more helpful, meaningful and ultimately safe results.”

Unlike other search engines, which deliver results based on what’s most often searched, Hulbee is based on semantic search, which focuses on the meaning of the word and on various themes associated with it. After sifting through data in 33 languages, the Hulbee search engine presents a collection of words in colorful thematic tiles on the screen, giving users a broad range of choices rather than just the most popular selections. For example, if a user searches for the word “bass,” tiles that refer to the fish, the musical instrument, the shoe manufacturer and other related terms surface, ensuring that even when a person doesn’t know the exact term to input, they can count on a word cloud of options that increases the chances of finding what they are looking for.
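The “bass” example can be caricatured with a hand-built theme map. This is purely illustrative: Hulbee derives its themes algorithmically from multilingual data, whereas here the tiles are simply hard-coded for two ambiguous words:

```python
# Toy illustration of theme tiles for an ambiguous query.
# The theme map is hand-built for this example; a real semantic engine
# would derive these associations from data, not a lookup table.

THEMES = {
    "bass": ["bass (fish)", "bass guitar", "Bass (shoe brand)", "bass voice"],
    "jaguar": ["jaguar (animal)", "Jaguar (car maker)"],
}

def tiles(query):
    """Return the thematic tiles to display for a query.

    Falls back to the query itself when no themes are known."""
    return THEMES.get(query.lower(), [query])

print(tiles("bass"))
```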

The Hulbee search experience includes:

  • No cookies or collection of private data
  • A clean user interface
  • Search results, supplemented by a word cloud of related themes or content, that more quickly deliver to users what they’re looking for
  • The ability to categorize search results by web, images, video, and music content

Hulbee joins a crowded field as something markedly different: a technology company, rather than a software outfit, focused on information processing and analysis – not just simple search. Hulbee uses text analysis and document management technology developed over a decade and a half with private companies in Europe.

About Hulbee:
Hulbee AG is a software technology company based in Switzerland with a 15-year track record in eCommerce, SaaS, cloud computing and business intelligence. Hulbee’s products, which have long analyzed, archived and discovered information, added a safe search engine for consumers in the U.S. in 2015. Founded in 2008 by Andreas Wiebe, Hulbee is headquartered in Egnach, Switzerland. For more information, visit or follow on Twitter at @HulbeeUS.

An Updated Search Engine Helps Drivers Find The Best Car Insurance Quotes!

LOS ANGELES, Aug. 5, 2015 /PRNewswire-iReach/ — A car insurance quotes website has been updated with a new and improved search engine that finds the best car insurance quotes for each client.


A new and updated search engine helps clients look for the best car insurance quotes in their area. Clients can use this system for free. The website is constantly updated with new car insurance quotes and uses a professional search engine to select the best coverage plans for each client.

In order to search for car insurance, clients will have to complete a single online form. This helps the search engine make a better selection between different policies and choose only the quotes that are relevant for each visitor. Clients also have the option to select the desired type of insurance and coverage amount.

How to search for car insurance quotes

In order to get car insurance quotes, clients have to complete an online questionnaire with some simple information:

  • The car’s model, technical condition and model year. These details about the vehicle that needs coverage are important for determining insurance prices and will help brokers choose relevant quotes for each visitor.
  • The driving record. The driving record also plays an essential role in calculating car insurance premiums. Someone who has a clean driving record, without traffic violations, will get better car insurance rates.
  • Information about the policy. Clients are able to search for a more specific coverage plan by selecting the coverage amount and the type of car insurance that they want.

The search engine has been updated and some functionality issues have been resolved. The process is now smoother, allowing clients to get multiple car insurance quotes faster.

“Shopping for car insurance is now easier online. We offer the best car insurance quotes from any area and we help any driver get cheaper coverage premiums,” said Russell Rabichev, Marketing Director of Internet Marketing Company.

The website is an online provider of life, home, health, and auto insurance quotes. It is unique because it does not stick to one kind of insurance provider, but brings clients the best deals from many different online insurance carriers. In this way, clients have access to offers from multiple carriers all in one place. On this site, customers can get quotes for insurance plans from various agencies, from local and nationwide agencies to brand-name insurance companies.

For more information, please visit

Microsoft giving Seahawks fans free tickets, memorabilia for using Bing search engine

Microsoft is giving Seahawks fans a reason to use its search engine over competitors.

Bing teamed up with Seattle’s professional football team to launch a custom version of its Rewards program for those interested in winning Seahawks tickets, autographed memorabilia, a road trip to an away game, and more.

The program basically rewards users for conducting searches on Bing. The more you use Bing, the more rewards points you can collect to exchange for a chance to win prizes like these:

  • Win 2016 Seahawks Season Tickets
  • Hold your Fantasy Football Draft in a CenturyLink suite
  • Win a road trip to the Seahawks vs 49ers game
  • Player autographed gear
  • Weekly drawings for Seahawks tickets
  • Signed Seahawks Training Camp jerseys
  • Behind the scenes tours of the VMAC and CenturyLink
  • An Insider Draft Preview with Seahawks coaching staff
  • Go golfing with the Seahawks
  • Attend a post-game press conference

You can start redeeming credits on August 17. More information on the new deal is here, and you can access your Bing Rewards dashboard here.

Bing and the Seahawks have had a long-standing partnership over the past several years. The search engine has given away free tickets to Bing users in the past, and the Bing logo is often seen around CenturyLink Field and on the team website. This month’s training camp is also “presented by Bing.”

Microsoft is expecting search engine market share gains from Bing with the release of Windows 10, which has heavy Bing integration throughout the new operating system. The company’s search revenue rose 21 percent to $922 million in its most recent earnings report, with Microsoft crediting increased Bing search market share, and higher revenue per search.

North Korea is operating a science search engine, report says

The North Korean portal is accessible by smartphone and is used to improve farming practices.
By Elizabeth Shim   |   Aug. 6, 2015 at 12:25 PM

North Korea has built an online portal for the country’s scientific community that allows users to access the data of major research institutions in the country. Photo by Katharine Welles/Shutterstock

SEOUL, Aug. 6 (UPI) — North Korea has built an online portal for the country’s scientific community, and the search engine can be accessed from smartphones, according to a report.

The pro-Pyongyang outlet Choson Sinbo published a story on the portal Wednesday, but the site has been in operation since November 2013 and smartphone service began in November 2014, South Korean outlet CBS No Cut News reported.

The comprehensive site “Yolpung,” meaning “fever” in Korean, has gathered the databases of each major scientific research institute in North Korea. Participants include Kim Il Sung University, the North Korean Confederation of Science and Technology’s Central Committee, North Korea’s Education Committee, the Grand People’s Study House – also known as the central library in Pyongyang – and various agricultural agencies.

According to the Choson Sinbo, the portal is “widely used in the practices of collective farms…according to sources, know-how on crop cultivation methods adjusted to agricultural, technological resources and recent weather patterns is available.”

In addition to text documents, videos and other multimedia are also available online.

The slogans “Pyongyang Mind” and “Pyongyang Speed” are enjoying a resurgence under North Korean leader Kim Jong Un. Construction in the city of Pyongyang is booming as a wave of new economic activity has taken hold across all sectors, including technology.

Visual search engine connects retailers with eager shopaholics

Mobility Sales and Marketing

Jackie Atkins

Published: August 5th, 2015

Toronto-based Craves is allowing retailers to harness the power of online shopping in a unique way with its new visual search engine. The fashion discovery app is hoping to entice retailers and shoppers alike to get on board with its new technology.

Launched in July, the app will generate matching or similar fashion items to an image uploaded by users. Once these results are produced, users have the option to buy the product directly from retailers.

Retailers can partner with Craves by becoming a part of its catalog. This means their products become visible in search results and are available for direct purchase through the app.

“Craves is known to feature merchants and retailers that offer quality products from trusted retailers that offer great customer experiences,” said co-founder Scott Cormier. “Even if our users don’t recognize the name of the retailer, they’ll know they can trust that they’re in good hands by the nature of the partnership alone.”

Additionally, Cormier explains that partnering with the app gives retailers access to a growing, fashion-minded user base actively looking to purchase items they’re interested in.

Online shopping is a booming industry, and visual search engines like Craves are hoping to simplify the process. There is a substantial market for these apps, which can be attributed to the growing desire for consumer convenience.

“I think apps that address the end to end shopping experience in the most frictionless way will have an edge,” explains Cormier.

In the future, Craves is looking to expand beyond fashion and scale up the company to apply its technology to other uses.

Why Search Engines Are Capable of Deciding Elections

Republican presidential candidate Donald Trump speaks at the Family Leadership Summit in Ames, Iowa, on July 18. Media coverage of Trump underscores how important Internet relevancy is to political campaigns. A new study finds that search engine rankings, a measure of the amount of online interest in a subject, are capable of swaying undecided voters. JIM YOUNG/REUTERS

Sarah Palin might think that the news media are the kingmakers in American politics, but new research suggests that a different entity might be responsible for drastically influencing election results: search engine companies.

A study published by Robert Epstein and Ronald E. Robertson under the auspices of the American Institute for Behavioral Research and Technology found that the so-called Search Engine Manipulation Effect can sway elections. The theory behind SEME is that voters’ preferences change substantially based on how high candidates’ names, campaign news stories or articles about political platforms appear in search engine rankings. By making certain results or candidates appear at the top of the list, a search engine company could conceivably wield an enormous influence on the public narrative of an election cycle.

Using data gathered from elections in California and India, Epstein and Robertson found that “biased search rankings can shift the voting preferences of undecided voters by 20 percent or more.” They also found that the shifts were affected by such factors as Internet access and demographics.

The study’s conclusion posits a circular “bandwagon effect.” Search rankings—how high something appears on the list of search results—affect undecided voters’ preferences by giving more exposure to certain candidates or platforms. Conversely, voter preferences also affect the rankings themselves as more popular subjects with more citations and clicks become more likely to move up the list, making them more relevant. Even minor changes in search engine rankings can lead to large changes in real-world momentum.
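The circular dynamic the study describes can be sketched as a toy simulation. This is illustrative only: the way clicks decay with rank, the function name, and the numbers below are my own assumptions, not the study's model.

```python
def simulate_bandwagon(popularity, rounds=5, exposure=0.3):
    """Toy feedback loop: higher-ranked items get more attention,
    and that attention pushes the same items further up the ranking."""
    popularity = list(popularity)
    for _ in range(rounds):
        # Rank items by current popularity, most popular first.
        ranked = sorted(range(len(popularity)), key=lambda i: -popularity[i])
        for rank, i in enumerate(ranked):
            # Top-ranked items receive disproportionately more new attention.
            popularity[i] += exposure / (rank + 1)
    return popularity

# Two nearly tied candidates: the tiny initial lead compounds round after round.
final = simulate_bandwagon([1.00, 1.01])
print(final)  # the second candidate's small edge has widened considerably
```

Even a 1% head start keeps the leader on top every round, so it absorbs the full exposure bonus each time; that self-reinforcement is the "bandwagon" the study warns about.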

The study argues that “search rankings are controlled in most countries today by a single company,” suggesting that SEME is potentially far more impactful than something like a biased cable news station, which would likely be balanced by other news outlets. The phrase “threat to democracy” crops up multiple times in the study.

Epstein, the study’s principal author, has been publishing research on the idea of SEME for several years now. Back in 2013, a paper that he presented at the Association for Psychological Science became the subject of a lengthy article, published in The Nation, detailing Epstein’s prior personal friction with Google. The new study, it should be noted, suggests what search engine companies might do but does not necessarily argue that they already do it. The Nation quoted Yale computer science professor Michael Fischer as saying, “To the extent that somebody wants to build a politically biased search engine, they are certainly capable of doing that.”

So far, there is little evidence that companies “want” to build such a search engine. Google relies on a computational algorithm that produces search results. The public can read an explanation of early Google software, penned by Sergey Brin and Lawrence Page when they were graduate students at Stanford, here.

Brin and Page argued two key points in this early paper. First, that “PageRank” was designed to limit third-party manipulation of search results, and second, that the algorithm models a “random surfer” – one more likely to land on pages with a large number of citations. The explanation appears under the heading “Bringing Order to the Web.”
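For reference, the PageRank formula as it appears in that early paper (simplified: d is a damping factor, T_1 through T_n are the pages linking to page A, and C(T) is the number of outbound links on page T):

```latex
PR(A) = (1 - d) + d \left( \frac{PR(T_1)}{C(T_1)} + \cdots + \frac{PR(T_n)}{C(T_n)} \right)
```

A page's rank is thus inherited from the pages that cite it, diluted by how many other pages each of those citers also links to.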

Though Google didn’t exactly plan it this way, the primacy of rankings on search engine lists has become an important competitive advantage in such fields as politics or business. After all, when it comes to public discourse presented by the media and the Internet, perception is often reality. If Donald Trump appears at the top of the list when someone searches for “GOP candidates,” he is more likely to be perceived as a viable political figure. Epstein calls this phenomenon “order effects.” Items placed first on a list, such as the highest results in a Google search, are more likely to be noticed and remembered than the 10th, 100th or thousandth result.

Many businesses and politicians already know this, which has led to the evolution of an entire industry designed to give companies a competitive edge in rankings. Search engine optimization providers use a variety of methods to make sure that their clients’ webpages “pop up” more often on engines.

Google has consistently denied that it would ever manipulate search results. In a statement emailed to Newsweek, a Google representative, after suggesting that further research be conducted, wrote that Google’s overall position on the issue “remains” what it was two years ago, when the company provided the same statement to The Nation: “Providing relevant answers has been the cornerstone of Google’s approach to search from the very beginning. It would undermine people’s trust in our results and company if we were to change course.”

Asked about the company’s ability to potentially sway the public on issues like the stock market, Google Executive Chairman Eric Schmidt famously quipped: “There are many, many things that Google could do that we chose not to do.”

A search engine competing with Google by keeping users anonymous just tripled its growth

One of the few companies competing with Google in the web search industry is DuckDuckGo, which tries to stand out by pledging not to track its users. Specifically, DuckDuckGo doesn’t share its users’ search queries with other sites, doesn’t store their search histories, and doesn’t store any of their computer or location information—things many other search engines do to boost their advertising efforts.

While DuckDuckGo is still a fraction of Google’s size, it just wrapped up its second consecutive year of impressive growth. (One of the company’s transparency measures is to publish aggregate daily usage statistics.)

At the end of 2014, DuckDuckGo was receiving about 7 million direct search queries per day, roughly double the amount it received a year prior. But if you look more closely, DuckDuckGo had a huge end to the year, after more modest growth earlier on.

Why? Two big distribution deals seem to have played key roles. In September, Apple started including DuckDuckGo as a search option in its Safari browser for iOS and OS X. (Google is still the default option.) Then in November, Mozilla added DuckDuckGo as a search option for its Firefox browser.

“DuckDuckGo has significantly increased its user base from both integrations,” CEO Gabriel Weinberg, who launched the company in 2008, tells Quartz. “Though the exact amount is unclear since we don’t track people.”

[Image: annotated chart of DuckDuckGo search queries]

A look at the company’s public stats suggests both moves contributed to an acceleration of growth. Over the first eight months of 2014, DuckDuckGo’s average month-over-month query growth was 3.5%. Over the last four months of the year, it was an average of 10.2%, roughly triple.

The company’s steepest growth driver to date, however, still seems to be the mid-2013 revelations of mass government internet surveillance. Google Chrome, for what it’s worth, still doesn’t include DuckDuckGo as a pre-installed search option.
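The arithmetic behind those figures is simple: average month-over-month growth is the mean of the successive monthly ratios, minus one. A quick sketch with made-up numbers (DuckDuckGo's real monthly totals aren't reproduced here):

```python
def avg_mom_growth(monthly_totals):
    """Average month-over-month growth rate from successive monthly totals."""
    pairs = list(zip(monthly_totals, monthly_totals[1:]))
    return sum(cur / prev - 1 for prev, cur in pairs) / len(pairs)

# Hypothetical series: 3.5% monthly growth over the first eight months,
# then 10.2% monthly growth over the last four months of the year.
early = [100 * 1.035 ** i for i in range(9)]
late = [early[-1] * 1.102 ** i for i in range(5)]

print(round(avg_mom_growth(early), 3))  # 0.035
print(round(avg_mom_growth(late), 3))   # 0.102
```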

Yandex Pushes Privacy With Beta Release Of Its Minimalist Browser

Russia’s Yandex has added a suite of new privacy-centric features to an experimental browser it unveiled last year, as well as switching the software from alpha to beta — and making it its default browser for international users.

The software giant took the wraps off the minimalist browser last November, dubbing it a “concept” and offering it alongside its extant Yandex.Browser. It’s now saying the concept will be the mainstream version it offers in international markets — although existing users of the older browser can carry on as is.

However in its largest market Russia, and select other regional markets, it will continue to maintain the concept browser as an experimental alternative — and its older browser remains in place as the primary offering there.

A Yandex spokesman said this is because in markets where it provides its core services (namely search and recommendations) it needs to know more about users, so switching to a private version completely would erode its quality of service. But, in international markets, it’s freer to experiment with offering private browsing as a lure to drive usage (and better compete with data-harvesting behemoth Google).

All Yandex’s browsers have a combined user-base of 22 million per month across all markets, according to the spokesman.

Additional privacy features were one of the most requested pieces of feedback on the alpha release, according to Yandex — especially from users in Germany, Canada and the U.S.

New privacy-focused functionality in the beta version of the minimalist browser includes not gathering usage stats or browsing data by default, and a Stealth Mode extension that lets the user instantly block analytics cookies and social network plugins.

The browser also prompts users to choose their own default search engine on launch — offering a choice of three, which varies depending on the user’s location.

More details about the new release can be found on the Yandex blog. AdGuard, which developed the Stealth Mode extension for the browser, has also published its source code on Github.

The beta version of the browser is available for Windows and OS X, in 15 languages, and can be downloaded at

Can new search engine ‘SciNet’ outsmart Google?

LONDON: Researchers claim to have developed a new search engine that outperforms current ones, and helps people to do searches more efficiently.

The SciNet search engine, developed by researchers at the Helsinki Institute for Information Technology HIIT, is different because it turns internet searches into recognition tasks by showing keywords related to the user’s search on a “topic radar.”

People using SciNet can get relevant and diverse search results faster, especially when they do not know exactly what they are looking for or how to formulate a query to find it.

After an initial query, SciNet displays a range of keywords and topics on the topic radar, using position and direction to show how these topics are related to each other.

The relevance of each keyword is displayed as its distance from the centre point of the radar – those more closely related are nearer to the centre, and those less relevant are farther away.

The search engine also offers alternatives that are connected with the topic, but which the user might not have thought of querying. By moving words around the topic radar, users specify what information is most useful for them.
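SciNet's actual implementation isn't described in detail here, but the relevance-as-distance idea is easy to picture: each keyword maps to a point whose distance from the centre shrinks as relevance grows. A minimal sketch (the function name and the linear mapping are my own assumptions):

```python
import math

def radar_position(relevance, angle_deg):
    """Place a keyword on a 'topic radar': relevance in [0, 1], angle
    groups related topics; more relevant keywords sit closer to the centre."""
    r = 1.0 - relevance           # distance from the centre of the radar
    a = math.radians(angle_deg)   # angular position on the radar
    return (r * math.cos(a), r * math.sin(a))

# A maximally relevant keyword sits exactly at the centre of the radar.
print(radar_position(1.0, 45.0))  # (0.0, 0.0)
```

Dragging a keyword inward would then correspond to raising its relevance weight, which the engine can feed back into the next round of results.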

When people are uncertain about a topic, they are typically reluctant to reformulate the original query, even if they need to in order to find the right information, researchers said.

With the help of a keyword cloud, people can more quickly infer which of the search options they receive is more significant for them because they do not need to visit the pages offered by the search engine to find new search words and start again.

It’s easier for people to recognise what information they want from the options offered by the SciNet search engine than it is to type it themselves, according to the project’s coordinator, Tuukka Ruotsalo.

Researchers have founded a company, Etsimo Ltd, to commercialise the search engine.

A Comprehensive List of Search Engines

When people think of search engines, the first name that comes to mind is often Google. It’s one of the most enduring brand names; it has even worked its way into the mainstream vernacular, with many people saying “Googled” in place of “searched online.” According to comScore, Inc., Google and its affiliated websites comprise 67.6% of the search engine market share in the United States, and, according to Netmarketshare, 66.44% worldwide.

Though prominent, Google is not the only search engine available. There are innumerable others that provide various interfaces, search algorithms, and other unique features. Many even base their search algorithms around specific philosophies, ones that often attract brand-new audiences.

In descending order, the remaining most popular search engine companies in the United States by market share after Google are Microsoft (18.7%), Yahoo (10.0%), Ask Network (2.4%), and AOL (1.3%), according to the comScore report.

Likewise, according to December 2014 data, the remaining most popular search engines worldwide by market share are Baidu (11.15%), Bing (10.29%), Yahoo! (9.31%), and AOL (0.53%).

The exact data is highly variable based on who’s reporting it, and it varies even further on a month-to-month basis. But generally speaking, the ranking order does not vary much.

This list does not necessarily include the 12 most used or well-known search engines after Google; instead, it includes search engines that differ from one another in terms of history, philosophy, content, targeted audiences, and other variables. With that in mind, let’s take a look at 12 of the most underrated search engines.


Bing

Based on comScore’s data, the next most powerful player in the search engine industry is Microsoft and its search engine, Bing.

Key differences between the two engines, according to the New York Times, lie in backdrop, search tools, and the amount of information offered on the immediate search page. Bing sports striking, engaging home pages, a display tool when searching for airline flights, aggregate restaurant rating badges, and more. One popular feature is its “linkfromdomain:” search term. This term allows users to see the most frequently used outgoing link from a given site. This can provide easy access to research pages or recommended sites from a trusted source.

Another operator, filetype:, allows users to restrict results to a specific file type. Researchers and students can search specifically for PDFs, Word documents, Excel spreadsheets, various image formats, and other universal file types on a whim. This helps to rule out unnecessary documents.

[Image: Bing’s filetype: operator in use]

Bing’s clean interface particularly excels when searching for videos. On Google, video results don’t integrate especially well with text results; on Bing, the listed videos fit neatly side by side in an interface that best accommodates them. This helps cut down on the amount of time a user spends scrolling.

Bing hasn’t been shy in comparing itself to Google, either. It has even launched a website titled “Bing It On” which directly compares its search results to those of Google.


Yahoo

Another powerful competitor in the search engine market is the long-enduring Yahoo. For many, Yahoo is much more than a search engine; it’s an online Swiss Army knife.

In addition to its search engine, the Yahoo Web portal offers easy access to its news aggregator, games center, retail options, travel guide, horoscope, and other varied features. Yahoo Finance is a popular aggregate for some of the best financial news available, combining information from CNN Money, The Street, and more.

Another extraordinarily well-used feature of Yahoo is Yahoo Answers, a forum that allows people to phrase questions in ways traditional search engines have difficulty handling. Other users can view the questions and draw on their background knowledge to give personalized answers.

Other popular aspects of Yahoo include easy photo sharing (facilitated by Yahoo’s purchase of Flickr), local news through Yahoo Local, and myriad entertainment options. By having all these convenient features in one place, users rarely have to venture elsewhere if they don’t want to.


Yandex

Founded in Russia in 1997, Yandex has quickly risen to become the country’s premier search engine. Since 2010, it has gone worldwide and become a popular resource for those looking for easy-to-use search pages across different languages. Its translation and cross-lingual search options are featured prominently on its homepage, and it accommodates English, Russian, German, French, and smaller Eastern European languages. This allows bilingual searchers or students working on language projects to more easily find whatever it is they’re looking for.

[Image: Yandex search engine homepage]


Ask

The search engine formerly known as “Ask Jeeves” was easily one of Google’s greatest competitors during the early days of the World Wide Web. Though not the hot commodity it once was, it remains popular for its accommodation of natural, colloquial language. After a user poses a question, it provides possible answers and a large list of other pertinent questions.

Ask’s historic accommodation of vernacular has, in essence, found a spiritual successor in voice commands and searches on mobile devices. Thanks to Apple’s Siri (which relies on Bing) and the Google app, there’s less stigma around voice commands, and they’re becoming more popular. With Siri, users can bypass their other apps or search engines entirely by just asking their phone a question.

Though Ask may have popularized the use of dialectal searches, it unfortunately is not as well-integrated with the programs that now champion them.


Dogpile

For those unsure of which search engine to use, many default to Dogpile — the engine that aggregates results from pretty much everyone else.

Like Ask, Dogpile is another site with early online history and considerable brand loyalty. Search results (from Google, Yahoo, Yandex, and more) are set upon a focused interface of white and varying shades of blue. Many prefer Dogpile for its chic design, comprehensive answers, and a template that doesn’t prove too distracting or cluttered.

[Image: Dogpile search engine interface]

Its listed features include Category Links, Yellow Pages, White Pages, Statistics Bar, Search Finder, Preferences, Spelling Correction, About Results, and Favorite Fetches. A Dogpile experience is easily personalized to each user’s liking.


Yippy

Many Internet users are unfamiliar with the Deep Web. According to CNN, the Deep Web encompasses everything traditional search engines have trouble finding. Pages in the Deep Web may be relatively unconnected to other parts of the Internet or housed on private networks.

[Image: Yippy search engine results]

Search engine Yippy (formerly Clusty) searches the Web using other search engines, but it presents results as “clouds” rather than a traditional list. This makes it more likely to find pages that would otherwise be buried or nearly impossible to find using search engines like Google or Yahoo. Though Yippy can’t scour every corner of the Deep Web (no search engine can), it is much more capable and efficient at finding pages for users with more obscure and niche tastes.

Duck Duck Go

With a name based on the popular kids’ game Duck Duck Goose, Duck Duck Go is a website that many find as approachable, user-friendly, and engaging as the game.

Duck Duck Go’s first priority is protecting user privacy. Many people are concerned about identity theft and hacking; these issues regularly appear on both local and national news. This search engine doesn’t reach into your history, email, or social media to drum up relevant information. Two totally different people can search the same term and get identical results.

The search engine also maintains a handy infinite scroll option (no need to click to other pages), reduced advertising spam, and prompts to help clarify a question.


EntireWeb

First launched back in 2000, EntireWeb is a search engine that lets site owners submit their websites to it for free. This results in a much less crowded search space and means those who submit are less likely to be drowned out by the competition. Queries can be submitted for regular Web search, image search, or real-time search.


blekko

Created just a few years ago in 2010, blekko (with a stylized lowercase “b”) is a search engine clearly inspired by Twitter. While Twitter (and now other social media sites) has “hashtags,” blekko has “slashtags.” When searching its database, blekko provides users with a series of related keywords with which to narrow their search.

For instance, searching “celebrity news” on blekko turns up the slashtags for Top Results, Gossip, Magazine, and Latest. Blekko’s interface, which combines minimalist squares and a varied color palette, is considered very user-friendly.

[Image: blekko search engine results page example]


Goodsearch

Recent years have seen an uptick in people’s interest in engaging with technology in an ethical manner. As corporations such as Google and Microsoft grow steadily more powerful, people have been scrutinizing more closely where their money and attention go.

Goodsearch is a search engine for the charitable. Powered by Yahoo, Goodsearch allows users to pick a cause of their choice; this can be a nonprofit organization or a school. Once a user selects a cause, Goodsearch donates 50% of the revenue that user generates to it. To date, Goodsearch has donated well over $11 million to a variety of causes. According to Goodsearch, the American Society for the Prevention of Cruelty to Animals (ASPCA) has received more than $50,000, and St. Jude Children’s Research Hospital has received more than $18,000 from the website.

[Image: Goodsearch donation example]

In recent years, Goodsearch has earned the attention of many celebrities, including Zooey Deschanel, Jessica Biel, and Montel Williams.


GigaBlast

Another search engine boasting enormous social and trust capital is GigaBlast. Founded in 2000, GigaBlast is, according to its LinkedIn page, the “leading clean-energy search engine.” An impressive 90% of its energy usage comes from harnessed wind energy, and the company maintains fewer than 10 employees.

Though the company is small, its reach is big: GigaBlast indexes well over 10 billion pages of content. As environmental issues become more prominent in the public consciousness, people are more likely to turn to sites like GigaBlast.


Baidu

Though a relative unknown in the United States, Chinese search engine Baidu is a juggernaut on the international scene. It’s the top search engine in China (with 62% of the search engine market share in 2013), and it is the second most popular search engine in the world.

“China’s Google,” as it is nicknamed, has been steadily growing since its incorporation in 2000, and it has recently begun courting English-speaking developers. Its features include searchable webpages, audio files, and images, a collaborative encyclopedia, and a bustling discussion forum. Thanks to its savvy smartphone integration, it has leapt past its immediate competitor, Qihoo 360, which now has only 21% of the Chinese search engine market share.


If Baidu manages to replicate its domestic success abroad, it might not be long before it becomes a household name in the United States.

In Conclusion

Once-popular search engines like and InfoSeek have either died out or are now sock-puppeted by their former competitors. InfoSeek attempted to charge for searches, failed, adjusted by depending on gaudy banner advertisements, became a generic “portal,” and was finally salvaged by Google. As AOL declined after its merger with Time Warner, so did its search engine. Now it is also part of Google.

Search engines in the preceding list still thrive because they capitalize upon some distinct corner of the market. For some, that market involves corporate social responsibility (Goodsearch, GigaBlast), social trends (Blekko), privacy concerns (Duck Duck Go), or utility (Yippy, Dogpile). Giants like Google, Bing, and Yahoo largely dominate the general market, so the others have had to specialize to survive.


Everything in our online life is indexed. Every idle tweet, status update, or curious search query feeds the Google database. The tech giant recently bought a leading artificial-intelligence research company, and it already has a robotics company on its books.

So what if Google, or Facebook, or any of the companies we entrust our information to, wanted to use our search histories to create an artificially intelligent robot?

Writer and director Alex Garland’s new film, Ex Machina, looks at just that. Garland has said in other interviews that he doesn’t want his film to be taken as a cautionary tale on the future of AI, which many scientists are worried about. But he told Quartz that there is something we should be worried about: the rise of large, unchecked organizations.

In Ex Machina there is a fictional search company called Blue Book, founded by a hirsute and reclusive genius, Nathan (played by Oscar Isaac). Alone in his compound in an unspecified forest, Nathan has built the world’s first artificially intelligent robot, AVA (played by Alicia Vikander). He has invited one of Blue Book’s employees, Caleb (Domhnall Gleeson), to see if it can pass the Turing test, which essentially determines whether a computer can trick a human into believing they are having a conversation with another human.

Nathan uses Blue Book’s search-engine database to create the backbone for AVA’s brain. Every search query builds up a thought pattern that mirrors our own, Garland said: “the way our brains jump around and have non-sequiturs that aren’t really non-sequiturs.”

But it’s not just about the tech companies, the information we willingly give them, or even what they’re doing with it.

“I have a kind of genuine ambivalence towards the tech companies,” Garland said. “I see them in many ways as being similar to NASA in the 1960s, pushing our potential forward: They’re the guys going to the moon.”

But, Garland said, humans or companies without oversight tend to abuse the power they have.

“It’s not just about data collection per se,” he said, “it’s just about power and accountability, and checks and balances.”

Given their financial and technical resources, as well as the sheer amount of data they have on us, we may well see the first AI robot (whatever that means, since intelligence is a fuzzy term in this context) come from a company like Google. While such companies may or may not be doing anything nefarious with our data, Garland said, we are willingly handing it over to them, without knowing what their intentions are.

“That I definitely find scary,” Garland said.

And what will future artificial intelligence actually entail? This is one of the main issues Garland’s film contends with. One of the shortcomings of the Turing test is its inability to prove whether a computer is effectively imitating human intelligence, or is truly intelligent itself.

“We could recognize in the AI that it might be able to think in some respects better than us, but its experience of the world would be defined by what it thinks and what it encounters, and it would just be different to us.”

As to whether to consider such a machine sentient, Garland said, “sentience feels like a function of curiosity.” But he is cryptic about whether AVA is meant to be as intelligent as the humans in the film: “She does a very good job of seeing human life, but that doesn’t mean she is human life.”

Goodbye Blekko: Search Engine Joins IBM’s Watson Team

Quiet for nearly two years, Blekko’s home page now says its team and technology are part of IBM’s Watson group.

Add Blekko to the list of startup search engines that have come and gone.

A message on the Blekko home page says that “The blekko technology and team have joined IBM Watson!” The page redirects to a post on IBM’s Smarter Planet blog, where things get a bit confusing. Blekko’s home page message gives the impression of a complete acquisition, but IBM’s post mentions the acquisition of “certain technology.”

In our work to enhance the performance of cognitive computing systems, we’re constantly exploring new ways to identify, understand and make use of information from both public and private sources. Toward this end, we are excited about the acquisition of certain technology from Blekko, Inc, which closed this afternoon. This will provide access to additional content that can be infused in Watson-based products and services delivered by IBM and its partners.

We’ve reached out to Blekko CEO Rich Skrenta (who tweeted the news) for clarification on what IBM is acquiring, and we’ll update this if we learn more.

Blekko came out of stealth in 2008 with Skrenta promising to create a search engine with “algorithmic editorial differentiation” compared to Google. Its public search engine finally opened in 2010, launching with what the site called “slashtags” — a personalization and filtering tool that gave users control over the sites they saw in Blekko’s search results.

In 2011, Blekko went on the offensive against Google over spam, launching a “spam clock” website that counted the one million spammy web pages Blekko claimed were being published online every hour. This came just as the debate over content farms and Google was really heating up, and in early 2011 Blekko announced that it was banning content farms from its index. About three weeks later, Google announced the Panda algorithm update, its own effort to combat spam in search results. Panda was by no means a response to Blekko’s announcement, but it was certainly indirect validation that Blekko, and others who had been complaining about the amount of spam in Google’s search index, were on to something.

Blekko had stayed out of the news for almost two years, though; among its last mentions were a search app for tablets and a funding round announced alongside layoffs.

Sleuthing Search Engine: Even Better Than Google?

Memex, Developed by the U.S. Military, Is Helping to Track Down Online Criminals

[Photo caption: A tool called Memex, developed by the U.S. military’s research and development arm, is a search engine on steroids. Photo: Defense Advanced Research Projects Agency]
In the run-up to Super Bowl XLIX, a team of social workers in Glendale, Ariz., spent two weeks combing through local classified ads sites. They were looking for listings posted by sex traffickers.

Criminal networks that exploit women often advertise on local sites around events that draw large numbers of transient visitors. “It’s like a flood,” said Dominique Roe-Sepowitz, who headed the Glendale effort.

Dr. Roe-Sepowitz is director of the Office of Sex Trafficking Intervention Research at Arizona State University. She has worked for five years with authorities in Houston, Las Vegas and Phoenix to track down traffickers.

In the past, she painstakingly copied and pasted suspicious URLs into a document and looked for patterns that suggested a trafficking ring. This year, she analyzed criminal networks using visual displays from a powerful data-mining tool, one whose capabilities hint at the future of investigations into online criminal networks.

The program, a tool called Memex developed by the U.S. military’s research and development arm, is a search engine on steroids. Rather than endless pages of Web links, it returns sophisticated infographics that represent the relationships between Web pages, including many that a Google search would miss.


For instance, searching the name and phone number that appear in a suspicious ad would result in a diagram that showed separate constellations of dots, representing links to ads that contain the name, the phone number, or both. Such results could suggest a ring in which the same phone number was associated with different women. Clicking on a dot can reveal the physical location of the device that posted the ad and the time it was posted. Another click, and it shows a map of the locations from which the ads were posted. Capabilities like this make it possible to identify criminal networks and understand their operations in powerful new ways.
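Memex’s internals aren’t public, but the idea described above, clustering ads that share a name or a phone number into connected groups, can be sketched with a simple union-find over a hypothetical ad schema (the `id`/`name`/`phone` fields below are illustrative, not Memex’s actual data model):

```python
from collections import defaultdict

def link_ads(ads):
    """Group ads into clusters that share a name or phone number.

    ads: list of dicts with 'id', 'name', 'phone' keys (hypothetical schema).
    Returns a sorted list of clusters, each a sorted list of ad ids.
    """
    # Invert the ads: every attribute value points at the ads containing it.
    by_attr = defaultdict(set)
    for ad in ads:
        for key in ("name", "phone"):
            if ad.get(key):
                by_attr[(key, ad[key])].add(ad["id"])

    # Union-find over ad ids: ads sharing any attribute join one cluster.
    parent = {ad["id"]: ad["id"] for ad in ads}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for ids in by_attr.values():
        first, *rest = sorted(ids)
        for other in rest:
            parent[find(other)] = find(first)

    clusters = defaultdict(list)
    for ad in ads:
        clusters[find(ad["id"])].append(ad["id"])
    return sorted(sorted(c) for c in clusters.values())

ads = [
    {"id": 1, "name": "Amber",   "phone": "555-0101"},
    {"id": 2, "name": "Crystal", "phone": "555-0101"},  # same phone as 1
    {"id": 3, "name": "Crystal", "phone": "555-0199"},  # same name as 2
    {"id": 4, "name": "Dana",    "phone": "555-0777"},  # unrelated
]
print(link_ads(ads))  # [[1, 2, 3], [4]]: ads 1-3 chain together, ad 4 stands alone
```

Note the transitive link: ads 1 and 3 share nothing directly, but both connect through ad 2, which is exactly the kind of relationship a page-by-page Google search would miss.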

Unlike a Google search, Memex can search not only for text but also for images and latitude/longitude coordinates encoded in photos. It can decipher numbers that are part of an image, including handwritten numbers in a photo, a technique traffickers often use to mask their contact information. It also recognizes photo backgrounds independently of their subjects, so it can identify pictures of different women that share the same backdrop, such as a hotel room—a telltale sign of sex trafficking, experts say.
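The published reports don’t describe Memex’s image pipeline, but one common technique for spotting near-duplicate backdrops is a perceptual “average hash”: every pixel contributes one bit saying whether it is brighter than the image’s mean, and two images match when the Hamming distance between their bit strings is small. A toy sketch over small grayscale grids (a real system would first downscale the photo and mask out the foreground subject):

```python
def average_hash(pixels):
    """Perceptual hash: each bit records whether a pixel is above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Number of differing bits; small distances mean similar images."""
    return sum(a != b for a, b in zip(h1, h2))

# Two "photos" of the same backdrop under slightly different lighting,
# and one photo of a different scene (toy 4x4 grayscale grids).
room_a = [[200, 198, 60, 58],
          [201, 197, 61, 57],
          [50, 52, 180, 182],
          [49, 53, 179, 181]]
room_b = [[p + 5 for p in row] for row in room_a]  # same scene, brighter
other  = [[30, 220, 30, 220],
          [220, 30, 220, 30],
          [30, 220, 30, 220],
          [220, 30, 220, 30]]

ha, hb, ho = average_hash(room_a), average_hash(room_b), average_hash(other)
print(hamming(ha, hb))  # 0: identical hashes despite the lighting change
print(hamming(ha, ho))  # 8: clearly a different backdrop
```

Because each bit compares a pixel against the image’s own mean, a uniform brightness shift leaves the hash unchanged, which is why the technique tolerates different lighting in the same hotel room.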

Also unlike Google, it can look into, and spot relationships among, not only run-of-the-mill Web pages but online databases such as those offered by government agencies and within online forums (the so-called deep Web) and networks like Tor, whose server addresses are obscured (the so-called dark Web).

Since its release a year ago, Memex has had notable successes in sex-trafficking investigations. New York County District Attorney Cyrus Vance said Memex has generated leads in 20 investigations and has been used in eight trials prosecuted by the county’s sex-trafficking division. In one case last June, Mr. Vance said, Memex’s ability to search the posting times of ads that had been taken down helped secure a sentence of 50 years to life in prison for a trafficker.

The creator of Memex is Christopher White, a Harvard-trained electrical engineer who runs big-data projects for the Defense Advanced Research Projects Agency, or Darpa. The Defense Department’s center of forward-looking research and development, Darpa put between $10 million and $20 million into building Memex. (The precise amount isn’t disclosed.) Although the tool can be used in any Web-based investigation, Dr. White started with the sex trade because the Defense Department believed its proceeds finance other illegal activities.

Memex is part of a wave of software tools that visualize and organize the rising tide of online information. Unlike many other tools, though, it is free of charge for anyone who wants to download, distribute, and modify it. Dr. White said he wanted Memex to be free “because taxpayers are paying for it.” Federal agencies have more money to spend, but local law-enforcement agencies often can’t afford the most sophisticated tools, even as more criminal activity moves online.

Among tools used by law-enforcement agencies, Memex would compete with software from Giant Oak, Decision Lens and Centrifuge Systems. The leader in the field is Palantir Technologies, whose software costs $10 million to $100 million per installation and draws from the user’s proprietary databases rather than from the Web. Palantir didn’t immediately reply to a request for comment.

Advertisements posted by sex traffickers generate between $90,000 and $500,000 a day in total revenue for a variety of outlets, according to Darpa.

Dr. White recently hired several economists to perform a large-scale study of the sex market and its finances, using Memex data along with other industry research.

Memex and similar tools raise serious questions about privacy. Marc Rotenberg, president and executive director of the Electronic Privacy Information Center in Washington, D.C., said that when law-enforcement authorities start using powerful data-mining software, “the question that moves in the background is how much of this is actually lawful.” Data-visualization tools like Memex enable enforcers to combine vast amounts of public and private information, but the implications haven’t been fully examined, he said.

Dr. White said he drew a “bright line” around online privacy, designing Memex to index only publicly available information. In anonymous networks like Tor, which hosts many sex ads, Memex finds only the public pages. But since the tool isn’t technically controlled by Darpa, independent developers could add capabilities that would make it more invasive, he acknowledged.

Another big question is whether sex traffickers and other malefactors will thwart Memex by changing their tactics. For example, they might blur out photo backgrounds if they knew law enforcement officials were searching for them. For this reason, law-enforcement users will withhold some of the proprietary data they developed while using Memex. “We want it to be free,” said Dr. White. “But there’s always this tension between knowing what people are doing…and alerting them to that fact so they change their behavior.”

Dr. White is starting to test other uses for Memex with law enforcement and government partners, he said, including recognizing connections between shell companies, following the chains of recruitment for foreign fighters drawn to the terrorist group ISIS, mapping the spread of epidemics, and following ads for labor and goods to understand supply chains involved in money laundering.

Bing Testing Recipe Data In Search Snippets

Bing is now testing recipe answers in its search snippets, showing the recipe overview, ingredients, steps and related recipes directly in the search results.

Jennifer Slegg spotted this test on Bing this morning and shared a few screenshots of it in action. We cannot replicate it on our end yet.



These special snippets are likely generated from recipe schema markup deployed by the site in question.
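Rich recipe results of this kind are generally driven by schema.org’s Recipe type, embedded in the page as JSON-LD. A minimal, illustrative example of the markup a recipe site might deploy (all values are made up):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Classic Banana Bread",
  "prepTime": "PT15M",
  "cookTime": "PT60M",
  "recipeYield": "1 loaf",
  "recipeIngredient": [
    "3 ripe bananas",
    "2 cups flour",
    "1 cup sugar"
  ],
  "recipeInstructions": [
    { "@type": "HowToStep", "text": "Mash the bananas and mix in the dry ingredients." },
    { "@type": "HowToStep", "text": "Bake at 350°F for 60 minutes." }
  ]
}
</script>
```

The `recipeIngredient` and `recipeInstructions` properties map directly onto the ingredient lists and steps Bing is reportedly surfacing in its snippets.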

Bing is probably testing to see whether searchers like this interface in the search results or whether they prefer to click through to the recipe pages themselves.

Google has also done a lot around recipes in the search results in the past and has also tested showing schema data directly in the snippets.

Quertle introduces new search engine for biomedical, life science and healthcare professionals

Linguistic engine outperforms current search solutions to deliver contextually relevant results for biomedical literature and patents, saving time and money

Biomedical and health IT solution developer Quertle LLC today released Quetzal® Search and Communication, a biomedical search engine. Built on Quertle’s Quantum Logic Linguistic™ technology, Quetzal is specifically optimized for biomedical, life science and healthcare professionals. The new tool drastically improves search by quickly delivering contextually relevant results, eliminating the frustration and wasted time searchers experience with current solutions.

A National Library of Medicine award winner used in 191 countries, Quetzal not only focuses on relevant results but also simplifies the process: it uses optimized ontologies to eliminate reliance on complicated Boolean searches, and separate author, journal and affiliation fields to avoid confusing results. Quetzal’s Power Term™ functionality allows users to find all members of a category such as “diseases” without cluttering results with hits from general terms (e.g., disease, syndrome), producing lists that directly answer questions such as: “Which diseases are affected by caffeine?”

“Researchers need an easy way to find the important content they want with assurance that they haven’t missed critical documents. With traditional search engines, users spend 95% of their time searching and only 5% reviewing the relevant material,” said Jeffrey D. Saffer, Ph.D., president of Quertle. “Our patent-pending technology reverses those percentages with a unique combination of linguistic and statistical methods to quickly uncover relevant results and minimize risk of missed materials.”

Quetzal Search and Communication includes unique features such as automated key concept extraction, embedded private journal clubs, useful filtering options, and instant searches for entire classes of entities – providing quick access to the information that matters most to users. Quetzal content includes PubMed, PubMed Central full text, patent grants and applications, AHRQ Treatment Protocols, NIH grants, TOXLINE and relevant news sources.

Benefits of Quetzal include:

  • Presentation of author statements pertinent to user query with terms highlighted in context, making it easier to see why results are relevant
  • Single-click access to full abstracts without leaving the results page
  • Easy-to-use, powerful filters – including Quetzal’s proprietary Key Concepts filter – that automatically identify important concepts, creating a time-saving way to home in on points of significant interest
  • Direct access to over 10 million free PDFs plus easy access through users’ library subscriptions, significantly improving productivity
  • Built-in note-taking feature simplifying user notation of crucial points
  • Private, secure Journal Club discussions providing group interactions

Quertle offers three versions of the Quetzal solution:

  • Basic (Free) – Enhanced linguistic searching of PubMed documents to find relevant results; ideal for undergraduate students and occasional searchers
  • Professional – Includes additional content, sources and filters, Journal Club and more; essential resource for physicians, researchers, other life science and healthcare professionals
  • Advanced – Most powerful version, includes patents and full-text searching for when missing key information would be costly; also appropriate for information professionals