Keio SFC 2018 Entrance Exam, Faculty of Environment and Information Studies, English, Question 2: Full Text

 When science fiction writers first imagined robot invasions, the idea was that bots would become smart and powerful enough to take over the world by force, whether on their own or as directed by some evildoer. In reality, something only slightly less scary is happening. Robots are getting better, every day, at impersonating humans. When directed by opportunists, malefactors, and sometimes even nation states, they pose a [46](1. false 2. minimal 3. particular) threat to democratic societies, which are premised on being open to the people.

 Robots posing as people have become a menace. Philip Howard, who runs the Computational Propaganda Research Project at Oxford, studied the deployment of propaganda bots during voting on Brexit, and the recent American and French presidential elections. Twitter is especially [47](1. encouraged 2. distorted 3. trusted) by its millions of robot accounts; during the French election, it was principally Twitter robots who were trying to make #MacronLeaks into a scandal. Facebook has admitted it was essentially hacked during the American election in November. In Michigan, Mr. Howard notes, “junk news was shared just as widely as professional news in the days leading [48](1. up 2. down 3. over) to the election.”

 Robots are also being used to attack the democratic features of the administrative state. This spring, the US put its proposed revocation of net neutrality up for public comment. In previous years, such proceedings attracted millions of commentators. This time, someone with an agenda but no actual public support unleashed robots who impersonated (via stolen identities) hundreds of thousands of people, [49](1. mimicking 2. conquering 3. flooding) the system with fake comments against federal net neutrality rules.

 To be sure, today’s impersonation-bots are different from the robots imagined in science fiction: they aren’t sentient, don’t carry weapons, and don’t have physical bodies. Instead, fake humans just have whatever is necessary to make them seem human enough to “pass”: a name, perhaps a virtual appearance, a credit card number and, [50](1. if necessary 2. therefore 3. in effect), a profession, birthday, and home address. They are brought to life by programs or scripts that give one person the power to imitate thousands.

 The problem is almost certain to get worse, spreading to even more areas of life as bots are trained to become better at mimicking humans. [51](1. Conferring 2. Given 3. Ignoring) the degree to which product reviews have been swamped by robots, which tend to hand out five stars with abandon, commercial sabotage in the form of negative bot reviews is not hard to predict.

 So far, we’ve been [52](1. content 2. excited 3. dissuaded) to leave the problem to the tech industry, where the focus has been on building defenses, usually in the form of Captchas (“completely automated public Turing test to tell computers and humans apart”), those annoying “type this” tests to prove you are not a robot. But leaving it all to industry is not a long-term solution. For one thing, the defenses don’t actually deter impersonation bots, but reward whoever can beat them. And perhaps the greatest problem for a democracy is that companies like Facebook and Twitter lack a serious financial incentive to do anything about matters of public concern. Twitter estimates that at least 27 million of its accounts are probably fake; researchers suggest the real number is closer to 48 million, [53](1. when 2. so 3. yet) the company does little about the problem.

 The ideal anti-robot campaign would employ a mixed technological and legal approach. Improved robot detection might help us find the robot masters or [54](1. inadvertently 2. potentially 3. secretly) help national security unleash counterattacks, which can be necessary when attacks come from overseas. There may be room for deputizing private parties to hunt down bad robots. A simple legal remedy would be a “Blade Runner” law that makes it illegal to deploy any program that hides its real identity to pose as a human. Automated processes should be required to state, “I am a robot.” When dealing with a fake human, it would be nice to know.

 Using robots to fake support, steal tickets, or crash democracy really is the kind of evil that science fiction writers were warning us about. The use of robots takes advantage of the fact that political campaigns, elections, and even open markets make humanistic assumptions, [55](1. ensuring 2. providing 3. trusting) that there is wisdom or at least legitimacy in crowds and value in public debate. But when support and opinion can be manufactured, bad or unpopular arguments can win not by logic but by a novel, dangerous form of force: the ultimate threat to every democracy.
