China’s big brother: artificial intelligence
December 15, 2017 3:14 AM

China’s big brother: how artificial intelligence is catching criminals and advancing health care Zhu Long, co-founder of pioneering Yitu Technologies, whose facial-recognition algorithms have logged 1.8 billion faces and caught criminals across China, says AI will change the world more than the industrial revolution.

I'm now busy worrying over the privacy/surveillance implications, whether most of us will lose our jobs, and whether humans will become redundant altogether. I am a woolly-headed humanities grad, so I am not well-versed in A.I. or coding. Probably one of the humans headed straight toward obsolescence :(

I did find this reassuring: Chinese woman offered refund after facial recognition allows colleague to unlock iPhone X
posted by whitelotus (39 comments total) 18 users marked this as a favorite
 
Chinese authorities are … identifying persons who deviate from what they determine to be ‘normal thought’ and then surveilling them.
That's it; we're done. If somebody here in the US doesn't draw a hard line to prohibit this, the next Administration is going to start targeting dissidents (as THEY define that) for internment or suppression. And nobody's going to draw that line.

Obviously, I don't find it reassuring that somebody got a refund on their iPhone purchase.
posted by Kirth Gerson at 3:56 AM on December 15, 2017 [12 favorites]


This article is pretty soft-core, which is to be expected from the SCMP, controlled as it is by Jack Ma, the billionaire owner of the data conglomerate Alibaba, one of the major corporate forces in China's [ ].

I could have said more about this topic but I'm afraid of letting my thoughts out.
posted by runcifex at 4:11 AM on December 15, 2017 [2 favorites]


China’s big brother: how artificial intelligence is catching criminals and advancing health care

I think the newspaper's misusing the concept of "Big Brother," or it's just an early phase, say 1982.
posted by chavenet at 4:25 AM on December 15, 2017


The Nexus trilogy by Ramez Naam isn't the greatest writing in the world, but holy shit is it prescient.
posted by Annika Cicada at 4:30 AM on December 15, 2017


I could have said more about this topic but I'm afraid of letting my thoughts out.

I for one have only normal thoughts about this topic.
posted by justsomebodythatyouusedtoknow at 4:37 AM on December 15, 2017 [14 favorites]


runcifex: I am well aware of SCMP's current ownership. Now I am curious about your thoughts... I hope that is not too personal.
posted by whitelotus at 4:54 AM on December 15, 2017


It's pretty messed up for this article to say "catching criminals" when it means "accusing people of crimes." When you scan a database of a billion people for "likely" suspects, it's easy to convincingly, accidentally frame innocent people. 1-in-a-million coincidences happen 1,000 times per scan. And if you do point the finger at a coincidentally-guilty-looking person, human psychology will fill in the rest of the proof.
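To make the arithmetic concrete, here's a minimal Python sketch; the one-in-a-million false match rate is an illustrative assumption, not a figure from the article:

```python
# Back-of-the-envelope: false matches when scanning a huge face database.
# The false_match_rate is an illustrative assumption, not a published figure.

database_size = 1_000_000_000    # "a database of a billion people"
false_match_rate = 1e-6          # a 1-in-a-million coincidence per comparison

# Expected number of innocent people who resemble any single probe face:
expected_false_matches = database_size * false_match_rate
print(f"expected false matches per scan: {expected_false_matches:,.0f}")  # 1,000

# Even if the real suspect is in the database and always matches,
# the chance that any one flagged person is actually the suspect:
precision = 1 / (1 + expected_false_matches)
print(f"P(a flagged person is the suspect) = {precision:.4%}")  # about 0.1%
```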

Of course statistical evidence, like DNA evidence, can be used correctly to convict or exonerate people. But we should be super skeptical of claims like that about new technologies -- a lot of the "scientific" forensic evidence that we accept in US criminal courts is pure nonsense, and many new tools will be as well.
posted by john hadron collider at 5:07 AM on December 15, 2017 [22 favorites]


We will all go underground by default while bot decoys express approved sentiments in public in our names. Facebook will be all fake accounts liking approved stuff. "I love my family, puppies, sunsets, and America. I feel blessed." And when the administration changes -- when sunsets and puppies are no longer approved -- you pull it all down, claim you didn't know there was a bot impersonating you, and release a new bot expressing the new line in your name. "I love my family, kittens, sunrises, and America. I feel blessed."
posted by pracowity at 5:14 AM on December 15, 2017 [5 favorites]


I guess I had better read The Theory and Practice of Oligarchical Collectivism while I still can. Is it on Kindle yet?
posted by thelonius at 5:17 AM on December 15, 2017 [3 favorites]


Dragonfly Eye has even identified the skull of a victim five years after his murder, in Zhejiang province.

Sooooo... how many skulls was it trained on to be able to achieve this remarkable feat?
posted by clawsoon at 5:30 AM on December 15, 2017 [6 favorites]



Dragonfly Eye has even identified the skull of a victim five years after his murder, in Zhejiang province.

Sooooo... how many skulls was it trained on to be able to achieve this remarkable feat?


I am sure they had plenty of data from their political prison camps.
posted by KaizenSoze at 6:13 AM on December 15, 2017 [1 favorite]


See also China's 'social credit' system, a sort of obedience score for every citizen, generated by an unknown algorithm from all the surveillance data they can collect. Apparently it factors in everything from traffic tickets to whether you've snitched on your neighbours recently. Having a good score makes you eligible for good jobs and membership in the Party.

Perhaps the terrifying sight of China going so far down this road so quickly will give some Western governments something to think about as they consider how far they've already gone. In my youth, arbitrary demands for "papers, please" were the sort of thing the bad guys did; we had the evils of totalitarianism kept fresh in our minds by a plethora of old war movies. It's worn off somewhat. Having a bold new dystopian nightmare looming large in world politics could serve to refresh people's memories.
posted by sfenders at 6:18 AM on December 15, 2017 [7 favorites]


Metafilter: I could have said more about this topic but I'm afraid of letting my thoughts out.
posted by Coventry at 6:45 AM on December 15, 2017 [2 favorites]


So when I read Maggie Shen King's "An Excess Male", I thought the subplot about programming predictive AI to root out thoughtcrime was pretty squarely in SF territory. I guess not.
posted by inconstant at 6:57 AM on December 15, 2017


Being reminded of The Theory and Practice of Oligarchical Collectivism, from the Wikipedia page, speaking of a possible real-world model for the fictional book in 1984 (emphasis mine):
However, the bureaucratic collectivist theory was formulated not by Trotsky, but by some of his followers mainly in the United States who dissented from his view of the Soviet Union as a degenerated workers' state. These theorists, such as Max Shachtman, saw the Soviet Union, along with Nazi Germany and Fascist Italy, as representing a new type of society, neither capitalist nor socialist, characterized by direct, integrated political and economic rule by a new ruling class of totalitarian state bureaucrats. In the era of the Molotov-Ribbentrop Pact and the lead-up to World War II, this theory claimed that the apparently opposed Fascist and Stalinist social systems were in effect identical in essence, reminiscent of Goldstein's claims that Oceania, Eurasia and Eastasia are actually identical and only differ in the justifying ideology.
I didn't remember this at all, but then again I read 1984 decades ago, with a much more naive political understanding, so I probably wouldn't have caught on to it in the first place. But this interpretation of Orwell revives my belief in his prescience. Now I can see the book not so much as warning of a 20th century-style, grim and grey Stalinistic dictatorship, but presaging the very world we are hurtling toward today. So, maybe it should have been 2024, not 1984. A forty year prediction error is pretty small, in the scheme of things.

It's quite apparent that there's a simple reductivist explanation for the Trump-Russia story that is not yet in the public consciousness, one that is chillingly congruent with developments such as those taking place in China. Namely, that the political phenomenon we're witnessing with Trumpism is just one facet of a global counter-revolutionary movement aimed at establishing world-wide "oligarchic collectivism", a world in which the 18th century dream of Liberty is shrunk down to mean liberty only for those who, as George Carlin liked to say, are the "real owners" of this place. In short, it's a new aristocracy revamped with a modern technocratic gloss.

What Orwell and his interpreters failed to imagine, and what Huxley got much closer to in Brave New World, was that such totalitarianism would not have to look like a dour Soviet police state. Rather, it could have a much more hedonistic, individualistic outward appearance in which a syncretic and nationalistic pseudo-religiosity can be merged with sex, drugs and rock-n-roll and advanced hyper-media to control the masses via an array of carefully crafted bamboozlements and mindfuckery.

The technology being rolled out in China is exactly what's needed for this agenda: it will give people the sense of a "freedom-ness", similar to Colbert's "truthiness": not the real thing, but a good enough simulation to keep the masses from rising up. At the same time, it provides a technological infrastructure for a totalitarianism that Stalin could only have dreamed of.
posted by mondo dentro at 7:38 AM on December 15, 2017 [11 favorites]


> “Europe is the toughest market because people are very concerned about privacy there,” Zhu says.

/me makes note.
posted by at by at 8:35 AM on December 15, 2017 [2 favorites]


I see the artificial intelligence hype bullshit train has pulled into China Station.

It's facial recognition for fuck's sake.
posted by GallonOfAlan at 8:43 AM on December 15, 2017 [1 favorite]


It's facial recognition for fuck's sake.

That's like saying a bomb is just oxidation for fuck's sake.

It's facial recognition, which is going to keep getting better and better, combined with deep learning algorithms, and integrated into the internet. All encoding authoritarian policies.
posted by mondo dentro at 8:48 AM on December 15, 2017 [7 favorites]


Still not AI. Going to be able to ask it to explain what's on the screen any time in the next 50 years? I doubt it.
posted by GallonOfAlan at 8:51 AM on December 15, 2017


But this interpretation of Orwell revives my belief in his prescience.

Orwell's politics were profoundly shaped by his Spanish Civil War experience. He served with the militia of the POUM, a dissident Marxist party allied with the anarchists, and experienced the period of rule by workers' collectives in Barcelona. Then he barely escaped with his life when the pro-Stalinist factions consolidated control and purged the POUM. Animal Farm is more or less an allegory for this story. I don't know much about the details of the POUM's political theory, but its brand of anti-Stalinist socialism seems, at least, not inconsistent with your quote about the bureaucratic collectivist theory.
posted by thelonius at 9:04 AM on December 15, 2017 [1 favorite]


Still not AI. Going to be able to ask it to explain what's on the screen any time in the next 50 years? I doubt it.

Oh, so if you mean that it's nowhere near solving the hard problem of consciousness with hardware and software... that's certainly true. It's not even close. But I suspect that true AI is the last thing that authoritarians who lust after the kind of technology in the article would want.

For me, that makes it all the more monstrous, that a shabby simulation of thought will be used to oppress us all.

Indeed, that's precisely the sort of crass curve-fitting that's implied by the article's ridiculous claim that the system identifies "people who deviate from normal thought". It, of course, does no such thing. What it will do is cluster you, your attire, your hairstyle, your purchasing history, your web browsing habits, and decide whether or not you're likely to be an enemy of the state. The scary thing is that this will be, in its own limited and cruel way, accurate.
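Here's a toy sketch of the kind of crude scoring I mean; every feature name, weight, and threshold below is hypothetical, made up for illustration:

```python
import numpy as np

# Entirely hypothetical behavioral features and "deviance" weights,
# set by whoever runs the system. None of this comes from the article.
features = ["vpn_use", "foreign_sites", "late_night_travel",
            "bought_banned_book", "attends_services"]
weights = np.array([0.9, 0.7, 0.2, 0.95, 0.6])

def risk_score(person: dict) -> float:
    """Weighted sum of observed behaviors. Note: no ground truth anywhere."""
    x = np.array([float(person.get(f, 0)) for f in features])
    return float(weights @ x / weights.sum())

citizen = {"vpn_use": 1, "foreign_sites": 1}
print(f"risk: {risk_score(citizen):.2f}")     # 0.48
print("flagged:", risk_score(citizen) > 0.4)  # True -- the threshold is arbitrary too
```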
posted by mondo dentro at 9:07 AM on December 15, 2017 [7 favorites]


But we should be super skeptical of claims like that about new technologies -- a lot of the "scientific" forensic evidence that we accept in US criminal courts is pure nonsense, and many new tools will be as well.

This. In addition to being a totalitarian nightmare, this sounds like using polygraphs or psychics to me, which means that I don't think *anything* will protect citizens subjected to it. I'm betting whoever is in charge of it has a quota of 'criminals' to catch and doesn't mind tossing false positives in the hole as long as it keeps the metrics looking good.

I'd also bet money that if everyone figured out the right fashions, memes and whatnot to fool a system like this, the people running it would just tweak the model until it 'caught' the requisite number of dissidents anyway.
posted by mordax at 9:11 AM on December 15, 2017 [4 favorites]


It, of course, does no such thing. What it will do is cluster you, your attire, your hairstyle, your purchasing history, your web browsing habits, and decide whether or not you're likely to be an enemy of the state.

But wait -- there's more: your tweets, Facebook associations, LinkedIn connections, emails, IMs, phone conversations, and of course, political activity. All of which taken together gives the governing body a perfect picture of whether you are their kind of citizen or not. Most of us here are probably not. You can hope the government thinks you're powerless to impede their agenda, and will not act against you, but you have absolutely no control over whether they will, or when they will.
posted by Kirth Gerson at 11:05 AM on December 15, 2017 [1 favorite]


mordax: I'm betting whoever is in charge of it has a quota of 'criminals' to catch and doesn't mind tossing false positives in the hole as long as it keeps the metrics looking good.

It truly is a Great Leap Forward in AI.
posted by clawsoon at 11:05 AM on December 15, 2017 [3 favorites]


Kirth Gerson: All of which taken together gives the governing body a perfect picture of whether you are their kind of citizen or not. Most of us here are probably not. You can hope the government thinks you're powerless to impede their agenda, and will not act against you, but you have absolutely no control over whether they will, or when they will.

New techniques of surveillance don't always have negative outcomes. Sometimes they really are used to catch nasty people whom all of us would like to see locked up. Sometimes they really do make us all safer. That's what happens when new surveillance techniques are in the hands of a truly liberal, truly democratic state.

But keeping the state that way (or getting it there in the first place): That's the struggle. And since undemocratic, illiberal power grows on itself and on the fear it creates, it's a never-ending struggle.
posted by clawsoon at 11:18 AM on December 15, 2017


Of course criminals fall into the category of people the government doesn't like. I have no reason to doubt the claims in the article that the software has snagged hundreds of them. That doesn't in any way reassure me that the US government isn't going to use this technology to identify political opponents.
posted by Kirth Gerson at 12:27 PM on December 15, 2017


As it happens, the technology has already been unleashed on a population defined as insurgents by their ethnicity. It is happening: it's horrifying.

Buzzfeed: This Is What A 21st-Century Police State Really Looks Like
Buzzfeed: China Is Vacuuming Up DNA Samples From Xinjiang's Muslims
The Guardian: In China's far west the 'perfect police state' is emerging
The Economist: China invents the digital totalitarian state
posted by glasseyes at 1:07 PM on December 15, 2017 [6 favorites]


Wired's recent version: Inside China's Vast New Experiment in Social Ranking

All governments deploy all available methods against their political opponents. It's frequently attitudes within the bureaucracy, more than laws, that limit the "available" methods: American soldiers will torture Muslims but not Christians, the FBI will try to convince MLK to commit suicide but not Senator Ron Wyden, DHS will harass Laura Poitras at the border but not Chomsky, etc.

Anyone who, like the Occupy Wall Street or Standing Rock protestors, builds a real social movement against unjust power structures gets reclassified so that the bureaucracy takes off the soft gloves. We have much less ideological or institutional resistance to abusing private information collected about individuals, so all this becomes easier for bureaucrats who deal only in information.

All this personal information we publish becomes a cultural pollutant that gives organizations power over us. We do need legislation to make collecting information a liability, not an asset. In essence, these liabilities would act as "building codes" that de facto compel online services to be designed never to share anything except with the people we intend to receive it.

There is one minor hiccup though: we actually haven't shown that "buildings that won't fall down" can be built in the software world. We can do encrypted messengers, and some now work fairly well, à la Signal, Wire, or even WhatsApp. We even know that mix nets can protect metadata, at some cost in latency. It's much harder to do the kind of semi-public sharing Facebook does with event invites and huge groups, or to run the many sharing-economy sites, both privately and safely.
posted by jeffburdges at 6:13 PM on December 15, 2017 [4 favorites]


On the bright side, pretty much all currently existing face recognition systems are prone to adversarial examples. You can perturb inputs in ways that are imperceptible to a human, or look innocuous, and yet cause the system to completely choke. A group of researchers at CMU, for instance, were able to print out some weird rainbow tortoiseshell glasses, with a pattern carefully optimized to convince the neural net that the grad student wearing them was actually Scarlett Johansson.
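For anyone curious about the mechanics, here's a minimal sketch of the fast gradient sign method (FGSM), a much simpler cousin of the optimization the CMU team used; the model and "image" below are toy stand-ins, purely illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for a face classifier; any differentiable model works.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 10))
model.eval()

x = torch.rand(1, 3, 64, 64)  # a random "face image" with pixels in [0, 1]
with torch.no_grad():
    label = model(x).argmax(dim=1)  # whoever the model currently sees

# FGSM (Goodfellow et al., 2014): nudge every pixel a tiny amount in
# whichever direction increases the model's loss on its own prediction.
x.requires_grad_(True)
loss = F.cross_entropy(model(x), label)
loss.backward()
epsilon = 0.03  # an imperceptibly small per-pixel change
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("before:", model(x).argmax(dim=1).item())
    print("after: ", model(x_adv).argmax(dim=1).item())  # often a new "identity"
```

The CMU attack was fancier -- the perturbation was confined to the printable glasses region and optimized over many steps -- but the underlying principle is the same.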
posted by vogon_poet at 10:46 PM on December 15, 2017 [1 favorite]


Like, these systems are a grave threat and need to be torn down and destroyed, but they are nowhere near flawless. And the depraved government contractors trying to sell them have an interest in exaggerating how omniscient and all-powerful their models can be.
posted by vogon_poet at 10:49 PM on December 15, 2017


There is no reason they need to actually work correctly to cause enormous social problems. You can silence dissent by creating fear even if you're only occasionally correct. Worse, this social ranking scheme could gamify being a well-behaved citizen, similar to how coupons or airline miles gamify being a good customer, or money gamifies being a productive member of society. We'll happily play the game even if it's poorly designed, so long as it presents the right image. We've survived racism, sexism, etc. throughout human history, so a few new social metrics, flawed in new ways, will not slow us down.
posted by jeffburdges at 11:19 PM on December 15, 2017 [6 favorites]


There is no reason they need to actually work correctly to cause enormous social problems.

That's for sure. I mean, for centuries authoritarians have managed to destroy their opponents using much flimsier methods. Like, by accusing them of not believing in the "correct" interpretation of some mythic story written by long-dead scribes of an invisible sky god.

So, I'm pretty sure that phoney "data driven" methods for demonizing people will be effective. In fact, more effective, because so many will see them as "objective".
posted by mondo dentro at 9:39 AM on December 16, 2017 [3 favorites]


We need people to understand that machine learning algorithms rarely yield answers with sound statistical properties, not because people cannot answer similar questions with controlled experiments and proper statistics, but because the machine learning algorithms were designed on the cheap to answer numerous such questions quickly. It's closer to your dog learning words like "walk", getting excited, and then trying to sell you a new pair of running shoes.

Advertisers were spending the money anyway, so if you can promise them slightly better results then they'll pay more for fewer impressions, but these approaches are no way to govern people's social opportunities, run a criminal justice system, or...

The NSA’s SKYNET program may be killing thousands of innocent people
"Ridiculously optimistic" machine learning algorithm is "completely bullshit," says expert.

Al Jazeera’s Islamabad bureau chief Ahmad Muaffaq Zaidan was labeled as a member of Al Qaeda by the NSA's metadata analysis. Worse, the NSA judged this deeply wrong assessment a "success" because he'd actually met some Al Qaeda members.
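For the record, here's a synthetic sketch of the textbook pitfall in play (not the NSA's actual pipeline): with a handful of positives, pure-noise features, and no held-out data, a model can look flawless while having learned nothing.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Scaled-down synthetic stand-in: 7 "known couriers" among 10,000 ordinary
# people, with features that are pure noise -- there is nothing to learn.
n_neg, n_pos = 10_000, 7
X = rng.normal(size=(n_neg + n_pos, 20))
y = np.array([0] * n_neg + [1] * n_pos)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Pitfall 1: accuracy is meaningless under extreme class imbalance;
# "everyone is innocent" already scores 99.93%.
print("training accuracy:", clf.score(X, y))

# Pitfall 2: evaluating on the training data. The forest memorizes the
# 7 positives, so it "finds" every courier...
print("training recall:", recall_score(y, clf.predict(X)))

# ...but on held-out folds, recall collapses, because there was no signal.
print("cross-validated recall:",
      cross_val_score(clf, X, y, cv=5, scoring="recall").mean())
```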
posted by jeffburdges at 2:12 PM on December 16, 2017 [5 favorites]


Wow, holy shit, that's some bad machine learning. If you did that on an undergrad class project you'd get a poor grade. They're making mistakes that get pointed out as mistakes in like chapter 2 of every textbook for teaching this stuff.

"Mistakes", I guess. It's probably not actually a bad outcome as far as they're concerned, to murder innocent Pakistanis.

I guess my point is there's two fears here. Fear one is that the authorities are going to use statistical models to justify doing what they wanted to do anyway. This is already here and getting worse.

Fear two, though, is they're actually going to have powerful statistical models that can like track people everywhere and somehow accurately detect dissident thought just based on patterns of interaction in the world. I think these are bullshit right now, and to the extent they get implemented will always be easily gameable.
posted by vogon_poet at 3:06 PM on December 16, 2017


We know people will game the system by stamping out dissident thought in themselves and their families, even if the system never identifies actual dissidents with better than random chance. It's exactly like they game the system now by becoming, and pushing their kids to become, advertisers, stock brokers, corporate lawyers, etc. instead of doing more fun, useful, and fulfilling jobs like teaching, engineering, research, etc.
posted by jeffburdges at 4:23 PM on December 16, 2017 [1 favorite]


I think my fears are well represented by this pulp SciFi story from 1954: Service Call.
posted by runcifex at 7:05 PM on December 16, 2017


It's probably not actually a bad outcome as far as they're concerned, to murder innocent Pakistanis.

On the other hand, if it targeted an actual bad person like Erik Prince, they would definitely consider that a bad outcome.
posted by Kirth Gerson at 6:03 AM on December 17, 2017 [2 favorites]








This thread has been archived and is closed to new comments