Keio SFC 2015, Faculty of Environment and Information Studies, English, Question 2 (Full Text)

 On November 2, 2010, Facebook’s American users were subject to an ambitious experiment in civic engineering: Could a social network get people to vote in that day’s elections?

 The answer was yes.

 The way to [31](1. nudge 2. shake 3. stroke) bystanders to the voting booths was simple. It consisted of a graphic containing a link for looking up voting places, a button to click to announce that you had voted, and the profile photos of up to six Facebook friends who had indicated they’d already done the same. [32](1. Against 2. With 3. Beyond) Facebook’s cooperation, the political scientists who conducted the study planted that graphic in the newsfeeds of tens of millions of users. Other groups of Facebook users were shown a [33](1. generic 2. generous 3. genetic) get-out-the-vote message or received no voting reminder at all. Then the researchers compared their subjects’ names with the day’s actual voting records to measure how much their voting prompt increased participation.

 Overall, users who were notified of their friends’ voting were 0.39 percent more likely to vote than those in the other group, and any resulting decisions to vote also appeared to spread to the behavior of close Facebook friends, even if those people hadn’t received the original message. That small increase in voting rates [34](1. amounted to 2. contrasted with 3. passed up) a lot of new votes. The researchers concluded that their Facebook graphic directly mobilized 60,000 voters, and, thanks to the ripple effect, ultimately caused an additional 340,000 votes to be cast that day.

 Now consider a hypothetical, [35](1. coolly 2. hotly 3. warmly) contested future election. Suppose that the CEO of Facebook personally favors whichever candidate you don’t like. He arranges for a voting prompt to appear within the newsfeeds of tens of millions of active Facebook users—but unlike in the 2010 experiment, the group that will not receive the message is not chosen at random. Rather, he makes use of the fact that Facebook “likes” can predict political views and political party affiliation, even [36](1. before 2. beneath 3. beyond) the many users who include that information in their profiles already. With that knowledge, he could choose not to change the feeds of users who don’t agree with his views. This could then [37](1. flap 2. flip 3. flop) the outcome of the election. Should the law constrain this kind of behavior?

 The scenario imagined above is an example of digital gerrymandering. All sorts of factors [38](1. contend with 2. contrast with 3. contribute to) what Facebook or Twitter present in a feed, or what Google or Bing show us in search results. Our expectation is that those companies will provide open access to others’ content and that the variables in their processes just help [39](1. field 2. wield 3. yield) the information we find most relevant. Digital gerrymandering occurs when a site instead distributes information in a manner that serves its own political agenda. This is possible on any service that personalizes what users see or the order in which they see it, and it’s increasingly easy to do.

 There are plenty of reasons to regard digital gerrymandering as so dangerous that no right-thinking company would attempt it. But none of these businesses actually promise [40](1. accuracy 2. neutrality 3. partiality). And they have already shown themselves willing to leverage their awesome platforms to attempt to influence policy. In January 2012, for example, Google blacked out its homepage “doodle” (the logo graphic at the top of the page) as a protest [41](1. against 2. by 3. for) the pending Stop Online Piracy Act (SOPA) in the US, which they thought would cause censorship. The altered logo linked to an official blog [42](1. entrance 2. entree 3. entry) asking Google users to contact Congress to complain; SOPA was ultimately abandoned, just as Google and many others had wanted. A social-media or search company looking to take the [43](1. first 2. last 3. next) step and attempt to create a favorable outcome in an election would certainly have the means.

 So what’s stopping that from happening? The most important fail-safe is the threat that a significant number of users, outraged by a betrayal of trust, would start using different services, hurting the company’s income and reputation. [44](1. However 2. Meanwhile 3. Moreover), although a Google doodle lies in plain view, newsfeeds and search results have no standard form. They can be subtly [45](1. teased 2. tickled 3. tweaked) without anyone knowing. Indeed, in our get-out-the-vote hypothetical situation above, the people with the most reason to complain would be those who weren’t given the prompt and may never know it existed. Not only that, but the policies of social networks and search engines already state that the companies can change their newsfeeds and search results however they like. An effort to change voter participation could be covered by the existing user agreements and require no special notice to users.

 [46](1. At the same time 2. By the way 3. More to the point), passing new laws to prevent digital gerrymandering would be a bad idea. People may be due the benefits of a democratic electoral process, but in the United States, both people and corporations also have a First Amendment right to free speech—and to present their content as they [47](1. know 2. see 3. wish) fit. Meddling with how a company gives information to its users, especially when the information is not false, is asking for trouble.

 There’s a better solution available: requiring web companies entrusted with personal data and preferences to act as “information fiduciaries.” Just as a doctor or lawyer is not allowed to use information about his or her [48](1. patents 2. patience 3. patients) or clients for outside purposes, web companies should also be prohibited from doing this.

 As things stand, web companies are simply bound to follow their own privacy policies. Information fiduciaries would have to do more. For example, they might be required to keep information about when the personal data of their users is shared with another company, or is used in a new way. They would provide a way for users to switch to unadulterated search results or newsfeeds to see how that content would appear if it were not personalized. And, most important, information fiduciaries would promise not to use any formulas of personalization based on their own political goals.

 Four decades ago, another emerging technology had Americans worried about how it might be manipulating them. In 1974, there was a panic over the possibility of subliminal messages in TV advertisements. As a result, the Federal Communications Commission prohibited that kind of communication. There was a [49](1. floor 2. foundation 3. foot) for that rule; historically, broadcasters have accepted a responsibility to be fair in exchange for licenses to use the public airwaves. The same duty of audience protection ought to be brought to today’s dominant medium. As more and more of what shapes our views and behaviors comes from invisible, artificial-intelligence-driven processes, the worst-case [50](1. scenarios 2. scenes 3. situations) should be placed off-limits in ways that don’t become restrictions on free speech. Our information intermediaries can keep their sauces secret, inevitably advantaging some sources of content and disadvantaging others, while still agreeing that some ingredients are poison—and must be off the table.

Note: fiduciary = 受託者 (trustee)
