'Bots', micro-segmentation of the internet population according to personality and interests, messages tailored to each profile, filter bubbles, fake news, targeted propaganda - all based on sophisticated artificial intelligence algorithms - serve to orchestrate our political sentiment. A weapon that, according to many, threatens the foundations of democracy.
No, the European Commission (EC) does not spend 80 million euros a year on alcoholic beverages. Nor is it true that Podemos (a leftist political party in Spain) proposes that illegal immigrants receive 1,200 euros a month, nor that senators from the PP (a center-right political party in Spain) applauded the halt to the pension hike.
These are just some of the hoaxes that serve to stir up the public mood on the eve of the elections. Debunking fake news like that cited above is the job of Maldita.es, a Spanish portal belonging to the IFCN (International Fact-Checking Network). Together with independent verifiers from thirteen European countries, and with a view to the vote on 26 May for the European Parliament, it has created the factcheckeu.info page, where anyone can check whether certain news circulating on the networks is true or not. It is an issue that gives the EC a headache, so much so that this year it has invested 5 million euros in combating fake news (three times more than in 2018). The concern is not unfounded, since nationalist and anti-European far-right parties are the ones that best exploit targeted propaganda in their favor. That is, the use of artificial intelligence algorithms to determine user profiles and to craft messages aimed at the Achilles heel of each one. If you doubt it, ask the Americans, or the Italians, or the English with their Brexit.
Everything points to Steve Bannon, the former chief executive of Donald Trump's campaign, having landed in Europe determined to make a name for himself. And that's because his digital and populist strategies work. After doing his bit in the United States, he helped the conservative Matteo Salvini, leader of the Northern League, rise in the Italian elections. In Spain, he has catapulted Vox (a new far-right political party) to stardom; as of May 2019, it has more than twice as many followers on Instagram as Podemos and four times as many as the PP.
'The Movement', in Brussels, and 'Dignitatis Humanae', in Rome, are the European headquarters of Bannon's two think tanks for manipulating voting intention through micro-targeting techniques, which allow citizens to be segmented by ideological profile, gender, interests, location or conduct on the Net. The key is to aim poisoned darts at the audiences that will react the most: incendiary tweets at those who will amplify their effect, and advertising at the undecided sectors. In the purest Trumpian style.
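As a rough illustration of the routing logic described above (a hypothetical sketch, not Bannon's actual software, whose internals are not public), a micro-targeting pipeline can be reduced to a rule that maps a user's profile to a message strategy; every field name and threshold below is invented:

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    # Hypothetical features a micro-targeting system might hold per user.
    ideology_score: float   # -1.0 (far left) .. +1.0 (far right)
    engagement_rate: float  # how often the user reacts to political content
    undecided: bool         # flagged as having no stable party preference

def pick_strategy(user: UserProfile) -> str:
    """Toy routing rule: incendiary content to highly reactive users
    (who will amplify it), advertising to the undecided, nothing to the rest."""
    if user.undecided:
        return "persuasion_ad"
    if user.engagement_rate > 0.7:
        return "incendiary_post"
    return "no_targeting"

# Example: a reactive partisan gets the incendiary variant,
# an apolitical undecided user gets the persuasion ad.
print(pick_strategy(UserProfile(0.8, 0.9, False)))  # -> incendiary_post
print(pick_strategy(UserProfile(0.0, 0.2, True)))   # -> persuasion_ad
```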
Another front the EC fears is Russia. And it's not a conspiracy theory or a James Bond movie. A document published in February by the EU Council of Ministers states that "defensive efforts should be directed at Russian sources, which are increasingly deploying disinformation strategies". In 2016, the Russian campaign to sabotage the U.S. electoral process reached 125 million users on Facebook alone, according to the report.
Google and Twitter were its other two targets. But these weren't randomly fired messages; they were tailored to specific targets while remaining invisible to the general public. "Individual feelings about political ideas or candidates are often very impressionable and therefore easy to manipulate," say Dipayan Ghosh and Ben Scott, authors of the Digital Deceit report published last year.
Targeted propaganda is based on the same artificial intelligence techniques used by personalized online marketing. But what happens when it's not about selling cars, but selling election candidates? "Digital advertising tools are perfectly legal. All those involved benefit economically in this ecosystem. They have developed brilliant strategies of active persuasion. But they have also opened the door to abuses that can harm the public interest and political culture, weakening the integrity of democracy," warns Ghosh. That is to say, the worst thing that can happen to you is no longer being influenced to buy a certain brand of car, but being manipulated in your political vision... and in your vote.
In 2017, Stephen Paddock killed 58 people and injured 851 others in the deadliest mass shooting committed by a single individual in United States history. When, the next morning, citizens turned to Google to learn more, they found several websites in the first positions of the search results describing Paddock as a liberal and anti-Trump sympathizer. The sites also claimed that the FBI had disclosed his connection to ISIS. But it was all fake. The strategy sought to shore up Trump's popularity and foster fear of terrorist attacks. Users of 4chan, an online forum known for disinformation campaigns, had spent the entire night working to pin the blame for the massacre on the Democrats. They got their false news to bypass the search engine's result-selection system when someone Googled the killer's name.
Surprising as it may seem, disinformation is protected by freedom of expression, which is why it is so difficult to stop. What can be stopped are techniques like the one that worked in the Paddock case. 'Black hat SEO' is designed to trick Google's search algorithm for a few hours, before it can correct the distortion. It's a critical weapon in the arsenal of targeted propaganda. "Search results on topical issues play a key role in shaping public opinion. That's why their manipulation is a danger to the integrity of the political debate," denounces Ghosh, who was also a technology advisor at the White House during Barack Obama's term. For example, when you want to find out what a particular candidate said in a recent public appearance and you Google his name, black hat SEO can ensure that the first results are pages full of lies meant to discredit the candidate in question.
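To see why the trick can work even for a few hours, consider a toy ranking model - purely illustrative, and far simpler than Google's real, non-public algorithm - in which a page's score mixes keyword relevance, freshness and a trust signal. A burst of brand-new, keyword-stuffed pages can briefly outscore established sources until quality signals catch up:

```python
def toy_rank_score(keyword_hits: int, hours_old: float, trust: float) -> float:
    """Hypothetical score: relevance weighted by trust, plus a freshness bonus.
    Real search ranking uses hundreds of signals; this is only a sketch."""
    freshness = max(0.0, 1.0 - hours_old / 24.0)  # bonus decays over a day
    return trust * keyword_hits + 5.0 * freshness

# An established news page vs. a fresh, keyword-stuffed hoax page:
print(toy_rank_score(keyword_hits=3, hours_old=48, trust=1.0))  # -> 3.0
print(toy_rank_score(keyword_hits=8, hours_old=1, trust=0.5))   # -> ~8.8
# The hoax briefly outranks the news page; once trust and quality
# signals update, its score collapses and it drops from the results.
```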
Imagine that this doesn't just happen to you, but to millions of other people at the same time. Suppose, moreover, that the deceptive news is so plausible and attractive that you not only believe it, but share it and tweet it to your friends. That, multiplied by millions. In less time than it takes you to take a nap, it has already gone viral. The shock wave is so immense, to begin with, because Google is the search method used by 85% of the world's Internet users. And because the top five search results take 75% of the traffic on the Net, and the first page of results, 95%.
More sophisticated is another tool that earned landslide victories for Obama in 2012, Trump in 2016, French President Emmanuel Macron in 2017 and Kenya's President Uhuru Kenyatta in 2017, and that won supporters for the Brexit campaign. We are talking about Social Media Management Software (SMMS), which determines which groups of people are the best targets for a given message. After collecting personal data from millions of users, either bought on the information market or harvested online, this software segments the population to decide how to show each group a particular message.
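The article does not reveal any vendor's internals, but the segmentation step it describes can be sketched with a standard clustering algorithm: represent each user as a feature vector and group the population into segments, each mapped to a message variant. A minimal sketch with scikit-learn's k-means on entirely synthetic data:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic user features: [age_norm, interest_immigration, interest_economy].
rng = np.random.default_rng(0)
users = rng.random((1000, 3))

# Partition the population into 4 segments, one message variant per segment.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(users)
variants = ["msg_security", "msg_jobs", "msg_pensions", "msg_anti_eu"]

# Each user is shown the variant assigned to their segment.
for user_id in range(3):
    seg = kmeans.labels_[user_id]
    print(f"user {user_id} -> segment {seg} -> {variants[seg]}")
```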
This is what happened in Trump's campaign: "Eleven very refined versions of each of the candidate's phrases were made, for psychologically different profiles. When I know your personality, I know your fears, and that's the key; that's where the brainwashing starts. This is called populism, not democracy. It's adjusting my message to what you want to hear," says Martin Hilbert, professor of communications at the University of California and technology advisor to the U.S. Library of Congress.
In the same vein, the so-called filter bubble identifies the part of the electoral program with which you might agree, and bombards you with that idea alone. This was done with great success in Obama's campaign, as Hilbert tells us: the team created a database of 16 million undecided voters in order to send them tailor-made propaganda and win them over to their cause. "You could disagree with 90% of his political program, and agree with only one of his electoral promises. If they show it to you all the time - on your Facebook feed, on Twitter, etc. - you end up thinking, 'Look, Obama is so good.' The filter bubble is so powerful that they changed the opinion of 80% of the people targeted this way. That's how Obama won the election," Hilbert emphasizes.
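Hilbert's account suggests a simple selection rule: estimate each undecided voter's agreement with every plank of the program, then fill their feed with only the top-scoring one. A minimal sketch with invented agreement scores (the campaign's real models are not public):

```python
# Hypothetical predicted agreement (0..1) of one undecided voter
# with each plank of a candidate's program.
agreement = {
    "healthcare_reform": 0.12,
    "tax_policy": 0.08,
    "student_debt_relief": 0.91,  # the one idea this voter likes
    "foreign_policy": 0.25,
}

# Filter-bubble rule: bombard the voter with only their top-scoring plank.
best_plank = max(agreement, key=agreement.get)
feed = [f"promoted post about {best_plank}"] * 5  # repeated exposure
print(feed[0], f"(shown x{len(feed)})")
```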
The icing on the cake is that the social networking services themselves help fine-tune the messages. "For example, if a lot of people started tweeting negative feelings about a comment made by Hillary Clinton, the SMMS would direct its pro-Trump propaganda to those users," Ghosh explains. Key here is the role of artificial intelligence algorithms that make complex decisions in real time to determine what kind of content to send to which segment of the population.
Those algorithms know you better than your own mother. The psychometrics expert Michal Kosinski, now a researcher at Stanford University, is well aware of this. One day, while working at Cambridge, he decided to find out how much could be learned about a person's psychological profile by studying their activity on Facebook - specifically, the posts they had 'liked'. He experimented with millions of volunteers: he gave them psychological tests and studied their behavior on the social network. From there, he created increasingly refined versions of an artificial intelligence algorithm capable of producing an X-ray of your personality and behavioral patterns just by having access to your Facebook page. With 68 likes, Kosinski showed that his program could predict a person's traits with reasonable certainty, including their skin color (with 95% success), their sexual orientation (88%) or their political affiliation (85%). Not satisfied with that, he found that with 150 likes, the algorithm could describe someone better than their own parents could. That included their most intimate needs and fears, and how they could be expected to behave.
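A common way to reconstruct this kind of system - though not necessarily Kosinski's exact pipeline - is to reduce a sparse user-by-page 'like' matrix to a few latent dimensions and fit a linear classifier on top. A minimal sketch on purely synthetic data:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression

# Synthetic sparse user-by-page 'like' matrix: 2000 users x 500 pages.
likes = sparse_random(2000, 500, density=0.05, random_state=42, format="csr")
likes.data[:] = 1.0  # binary likes

# Synthetic binary trait, correlated with liking the first 20 pages.
trait = (likes[:, :20].sum(axis=1).A.ravel() > 1).astype(int)

# Reduce the sparse matrix to dense latent components, then fit a linear model.
components = TruncatedSVD(n_components=50, random_state=0).fit_transform(likes)
model = LogisticRegression(max_iter=1000).fit(components, trait)
print(f"training accuracy: {model.score(components, trait):.2f}")
```

On real data the labels would come from the psychological tests Kosinski administered, and accuracy would be measured on held-out users rather than the training set.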
The young Kosinski immediately sensed the danger this tool could pose in the wrong hands. If someone were able to get to know everyone in the ocean of Facebook users so deeply, they could use that knowledge to target them with fine-tuned persuasion techniques... or to persecute homosexuals, liberal Arab women, dissidents in totalitarian regimes, and so on. Kosinski himself began to note in his scientific publications that his findings could pose a threat to an individual's well-being, freedom or even life. And that is precisely what happened, at least as far as freedom of thought is concerned.
Although this researcher was not willing to sell his algorithmic program for commercial or political purposes, his collaborator Aleksandr Kogan did: according to The Guardian, Kogan signed an agreement in 2014 with the British company Cambridge Analytica to provide them with a similar program. To do so, they collected from Facebook a database of millions of U.S. voters, without warning users of the purpose for which their data would be used. The plot was not uncovered until 2017 - somewhat late, because the invention had already been successfully tested by Cambridge Analytica in two of the most famous campaigns for which it had been hired: Trump's presidential campaign and the Brexit campaign in the United Kingdom.
"I have the honor to speak to
you
today about the power of big data and psychometrics in electoral
processes," announced its brand new CEO, Alexander Nix, at a
conference at the
2016 Concordia
Summit, where he boasted of possessing
"a model to predict the personality of every adult in the United
States. In their databases, they had 220 million subjects catalogued in
32 personality profiles.
Another similar company, which also worked for Trump in the U.S. elections, is Harris Media LLC, based in Texas. According to the civil rights group Privacy International (PI), Harris Media was hired by the ruling party for the Kenyan elections of August 2017 and used data from social networks to target specific audiences. "We are concerned about the role and responsibility of advisors working on political campaigns in Kenya, where tribal affiliation and religion of origin are politically sensitive data," the NGO said in a statement to Reuters.
False news and trolls dominated public discussion and fueled tension
and ethnic clashes in the days leading up to the polls.
Then there are the 'bots'. In August 2017, Trump thanked one Nicole Mincey in a tweet for congratulating him on "working for the American people". With 150,000 followers on Twitter, Mincey seemed, judging by her comments and her photo, to be an African American supporter of the Republican leader. But it turned out that, in reality, she was nothing more than the avatar of a computer program - just another 'bot'.
Trump himself acknowledged that he wouldn't have won the election without Twitter. But perhaps it was not thanks to the support of real Internet users so much as to the army of 'bots' that amplified his reach. These are accounts that do not belong to real users: they are managed by software that mimics human behavior. In May 2017, a study by the University of Georgia showed that bots can spread political messages on a massive scale and turn them into a trending topic in just a few hours, by pretending that it is real people who are sharing their feelings or opinions.
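The amplification the study describes is, at bottom, numerical: a coordinated fleet of accounts pushes a slogan's hourly volume past the threshold at which trending algorithms notice it. A toy simulation in which every number is invented:

```python
import random

random.seed(1)
TREND_THRESHOLD = 5000        # hypothetical posts/hour needed to trend
organic_rate = 40             # genuine posts per hour about the slogan
bots, posts_per_bot = 800, 8  # coordinated fleet, posts per bot per hour

for hour in range(1, 6):
    organic = organic_rate * hour  # slow organic growth
    automated = int(bots * posts_per_bot * min(1.0, hour / 3))  # fleet ramps up
    # A fraction of real users reshare what they see rising.
    resharing = int(0.1 * (organic + automated) * random.random())
    total = organic + automated + resharing
    status = "TRENDING" if total > TREND_THRESHOLD else "-"
    print(f"hour {hour}: {total} posts/hour {status}")
```

Within three simulated hours the bot fleet alone crosses the threshold, and from then on genuine resharing does part of the work - exactly the handoff the article describes.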
The Russian Internet Research Agency, also known as Kremlinbots or the Olgino Trolls, specializes in these techniques. Its 'bot' farm produced and disseminated thousands of Facebook and Instagram posts to support the blond magnate's campaign in the last U.S. elections, as Wired magazine reported in depth. In fact, in February 2018, the U.S. Department of Justice indicted this agency for having interfered in its political processes.
Similar methods are used in China and Russia to support the regime's ideas or divert attention from sensitive issues. And they have come to the fore in many other countries, such as France, where a failed attempt was made to discredit Macron before the last elections.
Political 'bots' are very useful for spreading hundreds of thousands of comments in support of a candidate, or for harassing an opponent or an inconvenient activist with a flood of aggressive comments. "Their mission is to deceive the public, to make them believe that it is real people who are expressing themselves on the Internet and that their messages represent the opinion of a majority of society," warn the experts Renée DiResta, John Little, Jonathon Morgan, Lisa Maria Neudert and Ben Nimmo in an article published on Motherboard.
To make matters worse, these automatons' artificial intelligence software can also include the ability to search the Internet for individuals sympathetic to a certain ideological line, in order to connect with them and send them propaganda, since these new human allies will be more likely to spread it later - without knowing that they are being manipulated by 'bots', of course.
When the seemingly endless Brexit campaign began in the United Kingdom, the victory of the Leave side was far from clear. What's more, the first polls claimed that the proposal would not win: staying in the EU had many important defenders. But the Remain camp didn't have a good targeted-propaganda company by its side. This is one of the subjects studied in depth by Vyacheslav Polonsky, a researcher at the Oxford Internet Institute, who analyzed 28,000 social network posts and some 13,000 hashtags to draw his conclusions.
"We realized that people skeptical of the EU and those who bet on
Brexit dominated the debate and were more effective in their use of
Instagram to activate and mobilize people across the country. They also
tended to be more passionate, active and extroverted in their online
behavior. On average they generated almost five posts per head more
than their opponents," he says.
AggregateIQ, a Canadian subsidiary of Cambridge Analytica, was hired by Brexit advocates for the campaign. Its messages could reach up to seven million people, according to a March article in the Spanish daily El Mundo, by exploiting the niche of the undecided and of those who felt resentment toward immigration.
Another case Polonsky has examined is the success of Macron's online campaign in the French elections of May 2017. "All interactions with sympathizers were recorded and analyzed semantically using advanced algorithms to extract keywords that resonated with voters. These keywords were then used by Macron in his speeches, adapted to different audiences and regions. The rallies were broadcast live on Facebook, while a team of content creators assembled each tweet with artisan dedication," explains Polonsky.
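Polonsky does not name the algorithms used, but the step he describes - extracting keywords that resonate from recorded interactions - matches a standard TF-IDF analysis. A minimal sketch with scikit-learn on a few invented supporter comments:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented supporter interactions, standing in for a campaign's recorded data.
comments = [
    "we need jobs and investment in our region",
    "jobs for young people, not promises",
    "Europe must protect us, reform Europe",
    "pension reform worries me, protect pensions",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(comments)

# Rank terms by their total TF-IDF weight across all interactions.
weights = np.asarray(tfidf.sum(axis=0)).ravel()
terms = vectorizer.get_feature_names_out()
top = sorted(zip(terms, weights), key=lambda t: -t[1])[:5]
print([term for term, _ in top])  # candidate keywords for speeches
```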
So there is no doubt that the new strategies of targeted propaganda are here to stay. We find ourselves, in Ghosh's words, in "the age of algorithmic disinformation". According to Polonsky, "we are witnessing the dawn of a new frontier, where politics is war and big data is one of the most powerful weapons in its arsenal". That implies that whoever dominates the new weaponry will lead the political discourse and the hearts of the people in this new and dubious digital democracy. Because, as this expert reminds us, "on Facebook, Twitter and Instagram, everyone can speak, but not everyone can be heard". And it's no longer about having charisma or a quality political discourse: the key is to have the best algorithm experts on your side.
______________________
(1) Source: Muy Interesante magazine (Spain), Number 456, May 2019.
The article in Spanish: "Los hilos que mueven tu voto" by Laura G. de Rivera, pp. 22-28.