Stop Using Google Trends — Medium

Alternatively titled ‘Be aware of context, and maybe start using Google AdWords instead’

Google Trends is a very interesting product, as it gives us real-time data on how people are using Google. Google is the Address Bar of the Internet, so if you need information on a topic, just type in “Euros” and you’ll have the scores and times of every game of the UEFA European Championship. Google can then track that interest in a topic, and we can see it. But what shouldn’t you use Google Trends for? Well, until people start using it appropriately, everything.

The problem with Google Trends is that it reports search numbers relative to the chosen date range and in the context of other trends, not as absolute counts.

How many searches that actually represents, Trends doesn’t tell us; all it does is give us a nice graph with a huge peak.
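To see why that matters, here is a minimal sketch of the same rescaling idea (not Google’s actual pipeline): normalize a series so its busiest day becomes 100, and a modest absolute bump turns into a dramatic-looking peak.

```python
# Hypothetical daily search counts for a niche term: tiny absolute volume.
daily_searches = [120, 135, 110, 140, 900, 150, 130]

# Trends-style rescaling: the busiest day in the window becomes 100,
# and every other day is expressed relative to it.
peak = max(daily_searches)
trend_index = [round(100 * n / peak) for n in daily_searches]

print(trend_index)  # [13, 15, 12, 16, 100, 17, 14]
# The chart shows a dramatic spike to 100, yet the "spike" is only
# 900 searches -- from the graph alone you cannot tell which.
```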

I’m disappointed that this is how data is being used; it really drives home the need for people to understand data before they use it incorrectly. Google Trends is an interesting tool, but please do a bit more research before using it.

Google Tests Feature That Lets Media Companies, Marketers Publish Directly to Search Results – WSJ

Google is experimenting with letting big publishers post content directly into search results, rather than going through crawling, indexing, and SEO.

Google is experimenting with a new feature that allows marketers, media companies, politicians and other organizations to publish content directly to Google and have it appear instantly in search results.

…

Google has built a Web-based interface through which posts can be formatted and uploaded directly to its systems. The posts can be up to 14,400 characters in length and can include links and up to 10 images or videos. The pages also include options to share them via Twitter, Facebook or email.

Each post is hosted by Google itself on a dedicated page, and appears in a carousel in results pages for searches related to their authors for up to a week, a Google spokeswoman said. After seven days, the posts remain live but won’t be surfaced in search results. Rather, they can be accessed via a link.

The Google spokeswoman said the experimental feature is separate from Google’s Accelerated Mobile Pages program, which aims to speed up online content by streamlining the code that powers Web pages.

With the AMP program, Google also “caches” pages, or saves copies of them on its own systems, in order to deliver them more quickly to users. AMP doesn’t host content directly, however, whereas Google’s new search feature does.

Google’s tests of the new posting tool come at a time when media companies, marketers and organizations of all types are increasingly distributing content by publishing directly to major online platforms, instead of driving users back to their own websites and properties.

Facebook has an Instant Articles feature, for example, which lets anyone host their content directly on the social network, provided they adhere to its content policies. Facebook also overhauled its own “Notes” feature in September 2015 which—similarly to Google’s new feature—offers a Web interface through which users can publish their content directly to the social network.

Apple also unveiled a Web-based publishing tool that allows users to arrange and publish content directly to its Apple News application.

The Google spokeswoman emphasized that the new tool remains in an experimental phase, and wouldn’t provide details on if or when it may be opened up to more authors. Google is currently testing it with a range of different types of partners, she said, but wouldn’t disclose exactly how many.

The current state of machine intelligence 2.0 – O’Reilly Media

A year ago today, I published my original attempt at mapping the machine intelligence ecosystem. So much has happened since.

I spent the last 12 months geeking out on every company and nibble of information I could find, chatting with hundreds of academics, entrepreneurs, and investors about machine intelligence. This year, given the explosion of activity, my focus is on highlighting areas of innovation, rather than on trying to be comprehensive.

Despite the noisy hype, which sometimes distracts, machine intelligence is already being used in several valuable ways. Machine intelligence already helps us get the important business information we need more quickly, monitors critical systems, feeds our population more efficiently, reduces the cost of health care, detects disease earlier, and so on.

The two biggest changes I’ve noted since I did this analysis last year are (1) the emergence of autonomous systems in both the physical and virtual world, and (2) startups shifting away from building broad technology platforms to focusing on solving specific business problems.

Investigating the algorithms that govern our lives – Columbia Journalism Review

Just an old-school-style investigative look at technology, data, algorithms, and humanity.

As online users, we’ve become accustomed to the giant, invisible hands of Google, Facebook, and Amazon feeding our screens. We’re surrounded by proprietary code like Twitter Trends, Google’s autocomplete, Netflix recommendations, and OKCupid matches. It’s how the internet churns. So when Instagram or Twitter, or the Silicon Valley titan of the moment, chooses to mess with what we consider our personal lives, we’re reminded where the power actually lies. And it rankles.

While internet users may be resigned to these algorithmic overlords, journalists can’t be. Algorithms have everything journalists are hardwired to question: They’re powerful, secret, and governing essential parts of society. Algorithms decide how fast Uber gets to you, whether you’re approved for a loan, whether a prisoner gets parole, who the police should monitor, and who the TSA should frisk.

Algorithms are built to approximate the world in a way that accommodates the purposes of their architect, and “embed a series of assumptions about how the world works and how the world should work,” says Hansen.

It’s up to journalists to investigate those assumptions, and their consequences, especially where they intersect with policy. The first step is extending classic journalism skills into a nascent domain: questioning systems of power, and employing experts to unpack what we don’t know. But when it comes to algorithms that can compute what the human mind can’t, that won’t be enough. Journalists who want to report on algorithms must expand their literacy into the areas of computing and data, in order to be equipped to deal with the ever-more-complex algorithms governing our lives.

The reporting so far

Few newsrooms consider algorithms a beat of their own, but some have already begun this type of reporting.

Algorithms can generally be broken down into three parts: the data that goes in; the “black box,” or the actual algorithmic process; and the outcome, or the value that gets spit out, be it a prediction or score or price. Reporting on algorithms can be done at any of the three stages, by analyzing the data that goes in, evaluating the data that comes out, or reviewing the architecture of the algorithm itself to see how it reaches its judgments.

Currently, the majority of reporting on algorithms is done by looking at the outcomes and attempting to reverse-engineer the algorithm, applying similar techniques as are used in data journalism. The Wall Street Journal used this technique to find that Staples’ online prices were determined by the customer’s distance from a competitor’s store, leaving prices higher in rural areas. And FiveThirtyEight used the method to skewer Fandango’s movie ratings—which skewed abnormally high, rarely dipping below 3 stars—while a ProPublica analysis suggested that Uber’s surge pricing increases cost but not the supply of drivers.
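As a crude illustration of that reverse-engineering approach: hold everything constant, vary one input, and look for systematic differences in the output. Here get_quoted_price is a hypothetical stand-in for any pricing system, with invented behavior; a real investigation like the Journal’s would collect many actual quotes instead.

```python
from statistics import mean

# Hypothetical stand-in for querying a pricing system as different customers.
def get_quoted_price(distance_to_competitor_miles):
    base = 19.99
    # Invented behavior for the demo: farther from a rival, higher the price.
    return base + (1.50 if distance_to_competitor_miles > 20 else 0.0)

near = [get_quoted_price(d) for d in (2, 5, 10, 15)]    # urban customers
far = [get_quoted_price(d) for d in (25, 40, 60, 80)]   # rural customers

print(f"mean price near a competitor: ${mean(near):.2f}")
print(f"mean price far from one:      ${mean(far):.2f}")
# A consistent gap that tracks one varied input -- and survives controls
# for other factors -- is the signal reporters look for.
```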

…

Can an algorithm be racist?

“Algorithms are like a very small child,” says Suresh Venkatasubramanian. “They learn from their environment.”

Venkatasubramanian is a computer science professor at the University of Utah. He has been thinking about algorithmic fairness ever since he read “Human Readable,” a short story by Cory Doctorow published in 2006. The story takes place in a future world, similar to ours, but in which all national infrastructure (traffic, email, the media, etc.) is run by “centralized emergent networks,” modeled after ant colonies. Or in other words: a network of algorithms. The plot revolves around two lovers: a network engineer who is certain the system is incorruptible, and a lawyer who knows it’s already been corrupted.

“It got me thinking,” says Venkatasubramanian. “What happens if we live in a world that is totally driven by algorithms?”

He’s not the only one asking that question. Algorithmic accountability is a growing discipline across a number of fields. Computer scientists, legal scholars, and policy wonks are all grappling with ways to identify or prevent bias in algorithms, along with the best ways to establish standards for accountability in business and government. A big part of the concern is whether (and how) algorithms reinforce or amplify bias against minority groups.

Algorithmic accountability builds on the existing body of law and policy aimed at combatting discrimination in housing, employment, admissions, and the like, and applies the notion of disparate impact, which looks at the impact of a policy on protected classes rather than its intention. What that means for algorithms is that an algorithm doesn’t have to be intentionally racist to have racist consequences.
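Disparate impact even has a standard rule-of-thumb quantification in US employment law, the “four-fifths rule”: if a protected group’s selection rate falls below 80 percent of the most-favored group’s, the outcome warrants scrutiny. A minimal sketch, with invented numbers:

```python
# Invented example outcomes: loan approvals per group.
outcomes = {
    "group_a": {"approved": 90, "applied": 120},
    "group_b": {"approved": 45, "applied": 100},
}

rates = {g: d["approved"] / d["applied"] for g, d in outcomes.items()}
favored = max(rates, key=rates.get)

for group, rate in rates.items():
    ratio = rate / rates[favored]
    # The four-fifths rule: flag any group whose selection rate falls
    # below 80% of the most-favored group's rate.
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.0%} ratio={ratio:.2f} {flag}")
# Note that nothing here asks about intent; only the measured impact matters.
```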

Algorithms can be especially susceptible to perpetuating bias for two reasons. First, algorithms can encode human bias, whether intentionally or otherwise. This happens by using historical data or classifiers that reflect bias (such as labeling gay households separately, etc.). This is especially true for machine-learning algorithms that learn from users’ input. For example, researchers at Carnegie Mellon University found that women were receiving ads for lower-paying jobs on Google’s ad network but weren’t sure why. It was possible, they wrote, that if more women tended to click on lower-paying ads, the algorithm would learn from that behavior, continuing the pattern.

Second, algorithms have some inherently unfair design tics—many of which are laid out in a Medium post, “How big data is unfair.” The author points out that since algorithms look for patterns, and minorities by definition don’t fit the same patterns as the majority, the results will be different for members of the minority group. And if the overall success rate of the algorithm is pretty high, it might not be noticeable that the people it isn’t working for all belong to a similar group.

To rectify this, Venkatasubramanian, along with several colleagues, wrote a paper on how computer scientists can test for bias mathematically while designing algorithms, the same way they’d check for accuracy or error rates in other data projects. He’s also building a tool for non-computer scientists, based on the same statistical principles, which scores uploaded data with a “fairness measure.” Although the tool can’t check if an algorithm itself is fair, it can at least make sure the data you’re feeding it is. Most algorithms learn from input data, Venkatasubramanian explains, so that’s the first place to check for bias.
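One way to make that data check concrete (a rough sketch in the spirit of this line of research, not the authors’ actual tool) is a predictability test: if a model can guess the protected attribute from the remaining columns, those columns encode group membership, and simply deleting the protected column won’t make a downstream algorithm blind to it. All names and numbers below are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000

# Invented dataset: a protected attribute and two ordinary-looking features.
protected = rng.integers(0, 2, n)
zip_code_income = protected * 1.5 + rng.normal(size=n)   # correlated feature
years_experience = rng.normal(size=n)                    # uncorrelated feature
X = np.column_stack([zip_code_income, years_experience])

# If the protected attribute is predictable from the remaining features,
# the data encodes it -- removing the column alone is not enough.
acc = cross_val_score(LogisticRegression(), X, protected, cv=5).mean()
print(f"protected attribute predictable with accuracy {acc:.0%}")
# Accuracy near 50% would suggest the features carry little group
# information; well above that, and the dataset deserves scrutiny.
```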

Much of the reporting on algorithms thus far has focused on their impact on marginalized groups. ProPublica’s story on The Princeton Review, called “The Tiger-Mom Tax,” found that Asian families were almost twice as likely to be quoted the highest of three possible prices for an SAT tutoring course, and that income alone didn’t account for the pricing scheme. A team of journalism students at the University of Maryland, meanwhile, found that Uber wait times were longer in non-white areas in DC.

Bias is also the one of the biggest concerns with predictive policing software like PredPol, which helps police allocate resources by identifying patterns in past crime data and predicting where a crime is likely to happen. The major question, says Maurice Chammah, a journalist at The Marshall Project who reported on predictive policing, is whether it will just lead to more policing for minorities. “There was a worry that if you just took the data on arrests and put it into an algorithm,” he says, “the algorithm would keep sending you back to minority communities.”
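That worry is easy to demonstrate in miniature. The simulation below is a toy model, not PredPol: two neighborhoods with identical underlying crime rates, patrols allocated wherever past recorded incidents cluster, and crime only recorded where police happen to be. The data quickly “proves” the algorithm right to keep going back to the same place:

```python
import random

random.seed(1)
TRUE_CRIME_RATE = 0.3                # identical in both neighborhoods
recorded = {"north": 1, "south": 0}  # a single early arrest seeds the loop

for day in range(200):
    # Allocate the patrol to wherever the data says crime "is".
    patrolled = max(recorded, key=recorded.get)
    # Crime happens everywhere, but only patrolled crime gets recorded.
    if random.random() < TRUE_CRIME_RATE:
        recorded[patrolled] += 1

print(recorded)  # e.g. {'north': 63, 'south': 0}
# Both areas had the same underlying crime rate, yet the dataset now
# "shows" crime concentrated in the north -- the feedback loop that
# keeps sending police back to the same community.
```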

Internet Society | Internet Issues, Technology, Standards, Policy, Leadership – Global_Internet_Report_2014.pdf

More than two decades ago, the Internet Society was formed to support the open development, evolution, and use of the Internet for the benefit of all mankind. Over the years, we have pursued that task with pride. We continue to be driven by the hope and promise of the benefits the Internet can bring to everyone.

In doing so, the Internet Society has fostered a diverse and truly global community. Internet Society Chapters and members represent the people of the world and the many and varied ways they use the Internet to enrich their own lives and the lives of their peers. They use the Internet to create communities, to open new economic possibilities, to improve lives, and to participate in the world. We are inspired by their stories of innovation, creativity, and collaboration.

Thanks to the Internet’s own success, we are now in an increasingly complex era where the stakes are much higher than before, and potential threats to the Internet’s core principles loom larger. To protect your ability to use the Internet for your needs – to keep it open and sustainable – we must do more to measure impacts and present the strengths of the open Internet model in more compelling ways, to convince policy makers, influencers, and the general public of the importance of our mission.

To this end, I am pleased to launch this, the first in an annual series of Global Internet Reports. With this report, the Internet Society introduces a new level of integrated analysis, measurement, and reporting to Internet governance discussions at all levels.

The Global Internet Reports will become a showcase of topics that are at the heart of the Internet Society’s work on the future of the Internet, weaving together the many threads of the diverse multistakeholder Internet community.