
The Persuasion Architecture of the Attention Economy

It’s a broad focus, but one which is necessary now. Zucked, by Roger McNamee. Just came out. This guy was with ol’ Zuck at the beginning. One of those gradual disillusionment stories. Let us eliminate from consideration the prospect that Zuckerberg was overtly malevolent in crafting and implementing his designs for his “platform”. So, what was it? In Tom Bissell’s review of the book in today’s NYT, a telling sentence appeared this morning in the appended comments section: Facebook “leaves the faceless mob in charge of your once independent thinking.” But surely this explanation is not a comprehensive one. How did the mind give up its control to the faceless mob? This intertwines with the story of the rise of Rationalism in the 17th century. Then, superstition reigned under the protective umbrella of the Catholic Church. To put forward the opinion that the earth moved around the sun, in the wrong company, could get you burned as a heretic. So, everything had to be put to the test of rationality and logic to combat this madness. This is individual rationality. But in the 19th century individual rationality began to be replaced by technological rationality. It was the dictates of the device (machines, automobiles) that reconfigured the relationship between the individual and his or her environment. The dictates of the device took precedence over the dictates of individual conscience and logic.

Zuckerberg and his ilk, then, have managed to “soften up” the mind of everyman and everywoman through the relentless implementation of the Fogg Behavior Model and related techniques, with social consciousness at its heart. Everyone wants to belong. The costs of this belonging don’t seem to be given their proper weight by most people. “Simply stating what options are more popular, or merely preselecting default choices, is often enough to influence decision…” We call this status-quo bias. In this way social deviance is pushed into a disreputable corner. There’s something wrong with you if you don’t identify with the masses, or so goes this grotesque reasoning. Our democratic system authorizes this. And after all, one isn’t making anyone do anything they don’t want to do in the first place by implementing Fogg’s Persuasive Design methodology in our internet marketplace. People need structure. They need routine. The flight from self demands it. Nowadays the smartphone provides the template for our daily routines. It orders our life in ways we couldn’t manage without it. You have been trained, smartphone users. Oh, there’s no real self anyway, so why put any faith in something that doesn’t exist? The softening up. Look at all those nice red dots popping up every 40 seconds; it feels so good, a mini-orgasmatron waiting to stimulate me every waking moment. Wait till they find a way to electronically integrate this with the clitoris. The faceless mob has been in charge since time immemorial, and with the rise of democracy, it became the sovereign force in society. Of course this force is wretched and stupid and can easily be manipulated by bad actors. We have found in the last few years just how easily manipulable it is.
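Since the Fogg Behavior Model does so much work in this argument, it is worth stating what it actually says. Fogg summarizes it as B = MAP: a Behavior occurs when Motivation, Ability, and a Prompt converge at the same moment. Here is a minimal sketch of that logic in code; the multiplicative scoring, the threshold, and the numbers are my own illustrative assumptions, not Fogg’s:

```python
# Toy sketch of the Fogg Behavior Model (B = MAP): a behavior fires
# when motivation and ability together clear an "action line" at the
# moment a prompt arrives. Scoring and threshold are invented here.

def behavior_occurs(motivation: float, ability: float, prompt: bool,
                    action_line: float = 1.0) -> bool:
    """Return True if the prompted behavior fires."""
    if not prompt:  # no prompt, no behavior, however motivated the user
        return False
    return motivation * ability >= action_line

# A red notification dot is the prompt; making the action one tap
# raises ability so high that even weak motivation suffices.
print(behavior_occurs(motivation=0.4, ability=3.0, prompt=True))   # True
print(behavior_occurs(motivation=0.4, ability=3.0, prompt=False))  # False
```

The design lesson the persuaders draw from this is plain: you rarely need to raise motivation; you lower the effort and multiply the prompts.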

Technology and the Law of Unintended Consequences

I consider today an article that appeared in the New York Times on Jan. 25. One whole day ago. It is by Cal Newport, “a computer scientist and author”. He claims that “Steve Jobs would not approve” of the way we use the iPhone in 2019, contrasting the typical user of this moment in time with the user of 12 years ago. That, after all, was the purple dawn of the smartphone in the West. (I guess they had them in Korea and Japan a few years before that.) This is not a coincidence: it is apparent to anyone who looks at the laws of social dynamics that such a device is a perfect fit for the thoroughly collectivist cultures of the Far East.

Newport claims that Jobs never saw what was coming. He relates that in the speech Jobs gave introducing the iPhone, he characterized it as “the best iPod we’ve ever made.” Then Newport goes on to say that “he doesn’t dedicate any significant time to discussing the phone’s internet connectivity features until more than 30 minutes into the address.” Oh, that proves it. Because he delayed speaking of the internet connectivity aspect of the iPhone until the metaphorical page 12 of Section One of the New York Times daily (“buried in the back pages”), that must mean he had no idea that internet connectivity would be a significant aspect of the iPhone’s operation. Could it not be instead that he wished to downplay this aspect of its performance?

But this is all from a man who could safely be said to occupy a place on the “Naturwissenschaft-Geisteswissenschaft spectrum” (my term) that favors the former and disfavors the latter. For those unfamiliar with this concept, it refers to the age-old quarrel between the traditional spirit of science and that of poetry. Wilhelm Dilthey recharacterized it in the 19th century as the quarrel between science and the humanities in general: Geisteswissenschaft, the humanities; Naturwissenschaft, hard science. In undertaking a course of higher education, one typically chooses one path or the other. It’s possible for one individual to take hard science courses and English literature courses, and many do. But the predominant trajectory will involve one category or the other. How many humanities courses did Mark Zuckerberg take? Of course it all comes down to money. Where is the money in majoring in English literature? And then ask the question: how much money has B. J. Fogg made in the last 12 years? (See the link to the Stanford Persuasive Technology Lab on this site for more information on B. J. Fogg and his Behavior Model.)

So Newport comes at the question from the perspective of Naturwissenschaft, certainly. I argue that this creates a blind spot concerning the whole question of technology, masking off the wider implications of understanding the trajectory of the phenomenon in question. Of course this also works in the other direction. Critique must be unsparing as regards the whole Naturwissenschaft-Geisteswissenschaft question. But we are here concerned with the spectre of rampant technology. Even Newport concedes that the average person is no longer the master of this technology, but that the reverse has occurred. It seems to me that the only effective antidote would be to attempt to redress a balance so far in favor of Naturwissenschaft that the opposite tendency is all but ignored.

My overall reaction to this article by Mr. Newport is one of incredulity: how could he imagine that the smartphone would develop in any way other than the way it did? “Mr. Jobs seemed to understand the iPhone as something that would help us with a small number of activities–listening to music, placing calls, generating directions. He didn’t seek to radically change the rhythm of users’ daily lives…[p]ractically speaking, to be a minimalist smartphone user means that you deploy this device for a small number of features that do things you value (and that the phone does particularly well), and then outside of these activities, put it away.” The ludicrously unrealistic tenor of these remarks beggars belief. Basic concepts in psychology illustrate that this is pie-in-the-sky thinking about the basic tendencies of a psychic organism that is constantly being molded into less and less autonomous configurations. We are speaking here ultimately about compliant efficiency as a religious phenomenon. The dictates of the operation of the apparatus determine our morality. We “should” respond to the push notification immediately because the device, by redefining reality, makes it possible. The red dot on the incoming email means urgency. One can ignore it. One can turn it to greyscale. But the dictates of the apparatus insist that it is otherwise.

Newport goes on to make pathetically weak recommendations to combat the smartphone’s march to ubiquity in the psyche of the captured user. “[I]f your work doesn’t absolutely demand that you be accessible by email when away from your desk, delete the Gmail app or disconnect the built-in email client from your office servers. It’s occasionally convenient to check in when out and about, but this occasional convenience almost always comes at the cost of developing a compulsive urge to monitor your messages constantly (emphasis mine–dw)”. Now, suddenly, Newport understands what psychic forces are in play here. This phenomenon, call it FOMO or operant conditioning or whatever, is far stronger in the minds of those most susceptible to it (which seems to be 90% of the population) than most people realized before 2007. If the desired outcome is some moderate measure of autonomy in this increasingly mechanized social environment, it necessitates one solution and one solution only: ditching the smartphone entirely.

Email Like a Robot

I reference in this essay the John Herrman article which appeared in the New York Times dated Nov. 7, 2018.

All you Gmail users out there already know what I’m talking about. Smart Reply, Smart Compose. Smart Compose is the new one. Say you type “Have a look…” in a message to one of your co-workers concerning your new widget design. Smart Compose will offer a way to end the sentence, “…and let me know what you think”, which appears in a fainter type symbolic of its tentative status. They’re not imposing, just suggesting! What could be wrong with that? This technology is so helpful, and yet I feel like something’s being taken away from me…and then guilty for not being able to appreciate this wonderful helpfulness.

The AI in Smart Compose is learning how your individual mind works.  But at the same time it collectivizes thought.  If more people use a certain phrasing than another, the one used more will come up first as the pertinent suggestion.  This is especially efficacious in situations where learned performance trumps expression of will, as in work communications.  Smart Compose is best at exploiting ways we’ve already been programmed, by work, social convention, and the tools we use in the modern world to communicate.  The question of responsibility arises–are you responsible for text that is computer-generated automatically in your name?  One can call this human self-automation.  What fun!
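The collectivizing mechanism is easy to picture. If candidate completions are ranked by how often a phrasing occurs across everyone’s mail, the majority phrasing always surfaces first. A minimal sketch of that ranking idea follows; it is a toy frequency model, not Google’s actual system, which uses neural language models trained at vastly greater scale:

```python
from collections import Counter

# Toy sketch of frequency-ranked completion: the continuation seen
# most often across past emails is offered first. An illustration of
# the ranking idea only, not Smart Compose itself.

corpus = [
    ("have a look", "and let me know what you think"),
    ("have a look", "and let me know what you think"),
    ("have a look", "when you get a chance"),
    ("thanks for", "your help"),
]

completions: dict[str, Counter] = {}
for prefix, continuation in corpus:
    completions.setdefault(prefix, Counter())[continuation] += 1

def suggest(prefix: str) -> str | None:
    counts = completions.get(prefix.lower().strip())
    if not counts:
        return None
    # The majority phrasing wins: thought collectivized by frequency.
    return counts.most_common(1)[0][0]

print(suggest("Have a look"))  # -> "and let me know what you think"
```

Whatever most people already say becomes what the machine nudges everyone else to say.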

Email thus becomes the backbone of a mosaic of components in which a general routinization of communicative strategies reaches for a predominant position in human thought.

This reminds me of Gustave Flaubert’s Dictionary of Received Ideas. This little compendium of terms, and of how people reflexively think about them, was the inspiration for Flaubert’s novel Bouvard and Pécuchet (1880), predating it by as much as thirty years; he conceived of it as illustrating “the all-pervasive power of stupidity”, as A. J. Krailsheimer puts it in his Introduction to the Penguin Classics edition. Example entries: AMBITION, “Always ‘insane’ unless it is ‘noble’”. ARISTOCRACY, “Despise and envy it.” BASES (OF SOCIETY), “Id est property, the family, religion, respect for authority. Show anger if these are attacked.” BLACK, “Always followed by ‘as ebony’ or preceded by ‘jet.’” C through Z is just as disheartening. Routinized thought is as old as the hills. But we are giving it a new dimension.


City of Surveillance

This is the title of an October 23, 2018 article to be found in The Guardian UK. It refers to Toronto, Ontario, which is now negotiating with Sidewalk Labs, Google’s sister company, to implement the latest iteration of the “smart city”. Toronto’s waterfront neighborhood Quayside is the guinea pig. As Gabrielle Canon, author of the Guardian article, states, “Despite Justin Trudeau’s exclamation that, through a partnership with Google’s sister company Sidewalk Labs, the waterfront neighborhood could help turn the area into a ‘thriving hub for innovation’, questions immediately arose over how the new wired town would collect and protect data. A year into the project, those questions have resurfaced following the resignation of a privacy expert, Dr. Ann Cavoukian, who claimed she left her consulting role on the initiative to ‘send a strong statement’ about the data privacy issues the project still faces.”

Canon writes a bit further on in the article that “[a]fter initially being told that the data collected would be wiped and unidentifiable, Cavoukian told reporters she learned during a meeting last week that third parties could access identifiable information gathered in the district.” For Cavoukian, this crossed the line. She told the Global News, “When I heard that, I said: ‘I’m sorry. I can’t support this. I have to resign because you committed to embedding privacy by design into every aspect of your operation’” [italics mine–dw].

Canon continues by citing other worrying examples of the shell game being utilized by Big Tech. Saadia Muzaffar, founder of TechGirlsCanada, stepped down from the project’s digital strategy advisory panel, announcing that it was not adequately addressing privacy issues.

The response by the representatives of Big Tech was predictable. Alyssa Harvey Dawson, Sidewalk Labs’ head of data governance, asserted that the Quayside project would “set a new model for responsible data use in cities. The launch of Sidewalk Toronto sparked an active and healthy public discussion about data privacy, ownership, and governance.” In a summary of the latest draft of the proposal, she recommends that the data collected in the city should be controlled and held by an independent civic data trust, and that “all entities proposing to collect or use urban data (including Sidewalk Labs) will have to file a Responsible Data Impact Assessment with the Data Trust that is publicly available and reviewable.” But is this enough to ensure that privacy by design is embedded into every aspect of the system? Jathan Sadowski, a lecturer on the ethics of technology, does not seem to be convinced. He cautions that “[b]uilding the smart urban future cannot also mean paving the way for tech billionaires to fulfill their dreams of ruling over cities. If it does, that’s not a future we should want to live in.” In response to concerns of critics, Sidewalk Labs has issued a statement, saying “At yesterday’s meeting of Waterfront Toronto’s Digital Strategy Advisory Panel, it became clear that Sidewalk Labs would play a more limited role in near-term discussions about a data governance framework at Quayside. Sidewalk Labs has committed to implement, as a company, the principles of Privacy by Design. Though that question is settled, the question of whether other companies involved in the Quayside project would be required to do so is unlikely to be worked out soon and may be out of Sidewalk Labs’ hands. For these reasons and others, Dr. Cavoukian has decided that it does not make sense to continue working as a paid consultant for Sidewalk Labs. Sidewalk Labs benefited greatly from her advice, which helped the company formulate the strict privacy policies it has adopted, and looks forward to calling on her from time to time for her advice and feedback.”

I have quoted extensively from this article because I believe that this progression of events is likely going to constitute a paradigm for the advance of repressive technologies all over the world.  Initiatives will typically begin with a professed strong commitment to privacy.  Then little by little it will become clear that even though the original consortium may remain committed to these principles, it will not be able to guarantee that third parties, which always insert themselves somewhere along the line, will exercise that same commitment.

For her part, perhaps Dr. Cavoukian will now rethink the position she professes about the general principles of Privacy by Design. She avers that privacy and the interests of large-scale business should not be viewed as zero-sum, insisting that we can have strict privacy rules in place concomitant with an environment in which large-scale business practice proceeds without serious impediment. The paradigm I have outlined here suggests otherwise.

See how it gathers, the technological storm, which is now upon us…

New Era Begins Wednesday

Freak-out at 2:18pm. You can’t opt out. It sets off a loud sound. The first nationwide cellphone alert will occur at 2:18pm Eastern, tomorrow, Oct. 3, 2018. This is only a test. Had this been an actual emergency, scary-looking men would commandeer your device, appear onscreen, and scare the bejeezus out of you. The intent is to use these alerts “only for advance warning of national crises”, according to CBS News today. “This is something that should not be used for a political agenda”, Jeh Johnson, former Secretary of Homeland Security, warned. They have the laws and protocols in place to avoid this, Johnson said. “Should not be used”? The weakness of this assurance from a former high official in the United States Government is staggering. And indeed, the reader may remember at this juncture what happened in Hawaii this past January, when over a million people, virtually the entire population of Hawaii, received a seemingly official alert on their phones that nuclear missiles were heading for them. A member of the faculty of Columbia University, Andy Whitehouse, had the temerity to raise a couple of issues. “The fact that you can’t turn this off, that it will be something that will arrive on your phone whether you like it or not, I think was perhaps upsetting and concerning to some people,” he is quoted as saying. I ask the reader to think about what these devices, and the technological society as a whole, are shaping up to be. More and more, elective uses are going to be supplemented by nonelective ones. The seeds are being sown in our grand social engineering experiment.

“We’re Just Cold People”

The new memoir by Steve Jobs’ daughter, Lisa Brennan-Jobs, is titled Small Fry. In it, the last vestiges of the thin veneer of counter-culture dissent from what used to be called “straight” culture are stripped away. It will be released to the general public on Sept. 4; excerpts have been leaked to Vanity Fair and other outlets. The buzz now is about that already famous line, uttered by Lisa’s stepmother, Laurene Powell-Jobs, at a therapy session when Lisa was a teenager, at which were present Steve, Laurene, and Lisa. Ms. Brennan-Jobs cried at one point and said to the therapist that she felt lonely and had wanted her parents to say good night to her. Ms. Powell-Jobs responded to the therapist: “We’re just cold people.”

I want to link this chilling tale with some words and ideas written by Joseph Weizenbaum, whose seminal work Computer Power and Human Reason sounded the alarm concerning a new sociopolitical dynamic way back in 1976. In those days, there was a good deal of resistance and skepticism about the burgeoning computer technology on the part of the general public. But it was only a surface phenomenon, as the following account shows. It tells the tale of the creation of ELIZA, a computer program devised by Mr. Weizenbaum which mimicked the response characteristics of a Rogerian psychotherapist, as I have related in these pages before. In the article “Silicon Valley Is Not Your Friend”, Noam Cohen traces the early development of this trend:

“Interactions between people and their computers were always going to be confusing, and that confusion would be easy for programmers to exploit. John McCarthy, the computer-science pioneer who nurtured the first hackers at M.I.T. and later ran Stanford’s artificial intelligence lab, worried that programmers didn’t understand their responsibilities. ‘Computers will end up with the psychology that is convenient to their designers (and they’ll be fascist bastards if those designers don’t think twice),’ he wrote in 1983. ‘Program designers have a tendency to think of the users as idiots who need to be controlled. They should rather think of their program as a servant, whose master, the user, should be able to control it.’

“Call it the Eliza problem. In 1966, Joseph Weizenbaum, a professor at M.I.T., unveiled a computer program, Eliza, which imitated a psychotherapist. It would, by rote, inquire about your feelings toward your parents or try to get you talking by rephrasing what you said in the form of a question. The program immediately touched a nerve, becoming a national phenomenon, to the surprise of Mr. Weizenbaum. For example, The New York Times swooned: ‘Computer Is Being Taught to Understand English’. Eliza understood nothing, in truth, and could never reach any shared insight with a ‘patient.’ Eliza mechanically responded to whatever appeared on the screen. A typical therapy session quickly devolved into a Monty Python sketch. (Patient: You are not very aggressive, but I think you don’t want me to notice that. Eliza: What makes you think I am not very aggressive? Patient: You don’t argue with me. Eliza: Why do you think I don’t argue with you? Patient: You are afraid of me. Eliza: Does it please you to believe I am afraid of you?)

“Imagine Mr. Weizenbaum’s surprise when his secretary looked up from her computer and interrupted her exchanges with Eliza to say to him, ‘Would you mind leaving the room, please?’ She wanted privacy for a conversation with a machine! Mr. Weizenbaum, appalled, suddenly saw the potential for mischief by programmers who could manipulate computers and potentially the rest of us. He soon switched gears and devoted his remaining years to protesting what he considered the amorality of his computer science peers, frequently referring to his experiences as a young refugee from Nazi Germany.
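It is worth pausing on how mechanically simple Eliza actually was: keyword matching, pronoun swapping, and canned question templates. The following is a toy reconstruction of that rote technique, not Weizenbaum’s original program:

```python
import re

# Toy reconstruction of Eliza's rote technique: match a keyword
# pattern, swap pronouns, and echo the input back as a question.

PRONOUN_SWAPS = {"i": "you", "me": "you", "my": "your",
                 "you": "I", "your": "my", "am": "are"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"you are (.*)", re.I), "What makes you think I am {0}?"),
    (re.compile(r"you don't (.*)", re.I), "Why do you think I don't {0}?"),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo makes sense."""
    return " ".join(PRONOUN_SWAPS.get(w, w) for w in fragment.lower().split())

def eliza(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # no understanding, only deflection

print(eliza("You are not very aggressive"))  # What makes you think I am not very aggressive?
print(eliza("You don't argue with me"))      # Why do you think I don't argue with you?
```

That a program this shallow could move a secretary to ask for privacy is the whole point of Weizenbaum’s alarm.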

One thought from Cohen’s account stands out in a blinding flash of sickening realization: computers will end up with the characteristics of their designers, as seen from a psychological standpoint.

Returning to the article by Noam Cohen: “Neither Mr. Weizenbaum nor Mr. McCarthy mentioned, though it was hard to miss, that this ascendant generation were nearly all white men with a strong preference for people just like themselves. In a word, they were incorrigible, accustomed to total control of what appeared on their screens. ‘No playwright, no stage director, no emperor, however powerful,’ Mr. Weizenbaum wrote, ‘has ever exercised such absolute authority to arrange a stage or a field of battle and to command such unswervingly dutiful actors or troops.’”

How does such a mindset gain a foothold in the collective consciousness of society? I believe it is instructive to point once again to Herbert Marcuse’s “Some Implications of Modern Technology” of 1941, which traces the course of a world-view that, developing wholly from within established rational frameworks, made it possible to pervert the rationalistically configured presuppositions animating the liberal autonomous individual into something approaching its opposite. I reproduce the relevant passage from this essay below:

“The principle of competitive efficiency favors the enterprises with the most highly mechanized and rationalized industrial equipment. Technological power tends to the concentration of economic power, to ‘large units of production, of vast corporate enterprises producing large quantities and often a striking variety of goods, of industrial empires owning and controlling materials, equipment, and processes from the extraction of raw materials to the distribution of finished products, of dominance over an entire industry by a small number of giant concerns. . . .’ And technology ‘steadily increases the power at the command of giant concerns by creating new tools, processes and products.’ Efficiency here called for integral unification and simplification, for the removal of all ‘waste,’ the avoidance of all detours, it called for radical coordination. A contradiction exists, however, between the profit incentive that keeps the apparatus moving and the rise of the standard of living which this same apparatus has made possible. ‘Since control of production is in the hands of enterprisers working for profit, they will have at their disposal whatever emerges as surplus after rent, interest, labor, and other costs are met. These costs will be kept at the lowest possible minimum as a matter of course.’ Under these circumstances, profitable employment of the apparatus dictates to a great extent the quantity, form and kind of commodities to be produced, and through this mode of production and distribution, the technological power of the apparatus affects the entire rationality of those whom it serves. Under the impact of this apparatus, individualistic rationality has been transformed into technological rationality. It is by no means confined to the subjects and objects of large scale enterprises but characterizes the pervasive mode of thought and even the manifold forms of protest and rebellion [italics mine–dw]. This rationality establishes standards of judgment and fosters attitudes which make men ready to accept and even to introcept the dictates of the apparatus.”

To my mind, the last phrase in the last sentence quoted above is key: “attitudes which make men ready to accept and even to introcept the dictates of the apparatus”. Couple this with the idea that computers will end up with the characteristics that are convenient to their designers, designers who have thoroughly introcepted what might be called scientistic presuppositions, ones which all but eliminate affect and privilege cold rationality, and the result can only be an impact that replicates these features in the general populace. We are led down a pathway in this examination that leads to many others. One of them, implied in the above quote, is that traced by Jacques Ellul, whose The Technological Society of 1954 undertakes a deep examination of this drive for efficiency in the context of technological development. Efficiency is expressed in optimally rationalized action, and the implications for a mindset which privileges, nay reifies, efficiency over all other concerns are profound. Computers will end up with the characteristics of their designers, and from there, these characteristics are passed on to the end-user on a scale unseen in human history. A key point in Ellul’s critique is that this “technological imperative” has been allowed to achieve full autonomy: all other concerns are subsumed into it, ends are subordinated to means, and the process is fully independent of any specific form of social organization, contrary to what Marcuse, as well as most leftist critics of the technocracy, would have us believe. Technique, in this reading, follows a protocol that proceeds without regard to context: “The primary aspect of autonomy is perfectly expressed by Frederick Winslow Taylor, a leading technician. He takes, as his point of departure, the view that the industrial plant is a whole in itself, a ‘closed organism’, an end in itself…the complete separation of the goal from the mechanism, the limitation of the problem to the means, and the refusal to interfere in any way with efficiency, all this is clearly expressed by Taylor and lies at the basis of technical autonomy.”

Considering the totalizing nature of this phenomenon, it is easy to see how the cohort which invests in it could grow almost without limit, given the rise of the new type of personality so memorably voiced by Ms. Laurene Powell-Jobs in the quote cited above. If one accepts the basic argument Marcuse puts forward in “Some Implications of Modern Technology”, especially that this technological rationality “make[s] men ready to accept and even to introcept the dictates of the apparatus”, one begins to grasp the enormity of the problem. Ellul and Marcuse convincingly demonstrate that this technological rationality operates at the level of deeply held philosophical conviction. This is nothing short of an ideological imperative, an ethos of living, which dovetails with the mechanistic processes of “persuasive design” to encourage the rise of “the cold people” and marginalize the rest of us.

Deception by Design

I will be referring here to a study undertaken by the Norwegian Consumer Council, or Forbrukerrådet. We are speaking here of the subtle and not-so-subtle techniques that the big gatekeepers (Google, Microsoft, Facebook) employ to “nudge” users into giving up more privacy than they otherwise might. What I really want to point the reader towards, however, are the wider implications of this engineering, beyond privacy questions strictly speaking, into the Marcusean realms of compliant efficiency and technological rationality, which I have discussed in some depth in previous essays. The study, titled “Deceived by Design”, carries the subheading “How tech companies use dark patterns to discourage us from exercising our rights to privacy”. It employs the neologism “dark patterns”, a term only about 8 years old, which can be defined as a user interface that has been carefully crafted to trick users into doing various things not necessarily in their best interest. This employment of deception stems from the monetization of data. From the report:

“Because many digital service providers make their money from the accumulation of data, they have a strong incentive to make users share as much information as possible. On the other hand, users may not have knowledge about how their personal data is used, and what consequences this data collection could have over time. If a service provider wants to collect as much personal data as possible, and the user cannot see the consequences of this data collection, there is an information asymmetry, in which the service provider holds considerable power over the user.”

One of the most salient techniques is “nudging”.  Again, from the report:

“The concept of nudging comes from the fields of behavioural economy and psychology, and describes how users can be led toward making certain choices by appealing to psychological biases. Rather than making decisions based on rationality, individuals have a tendency to be influenced by a variety of cognitive biases, often without being aware of it. For example, individuals have a tendency to choose smaller short-term rewards, rather than larger long-term gains (hyperbolic discounting), and prefer choices and information that confirm our pre-existing beliefs (confirmation bias). Interface designers who are aware of these biases can use this knowledge to effectively nudge users into making particular choices. In digital services, design of user interfaces is in many ways even more important than the words used. The psychology behind nudging can also be used exploitatively, to direct the user to actions that benefit the service provider, but which may not be in the user’s interests. This can be done in various ways, such as by obscuring the full price of a product, using confusing language, or by switching the placement of certain functions contrary to user expectations…[T]he information asymmetry in many digital services becomes particularly large because most users cannot accurately ascertain the risks of exposing their privacy.”
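The hyperbolic discounting mentioned in the report has a standard textbook form, V = A / (1 + kD): the present value V of a reward of size A shrinks with delay D at an individual rate k. A quick illustrative calculation (the amounts, delay, and rate below are invented) shows why the one-click path beats the privacy menu:

```python
# Hyperbolic discounting: V = A / (1 + k * D), where A is the reward,
# D the delay before it is felt, and k the discount rate.
# All values below are invented for illustration.

def discounted_value(amount: float, delay: float, k: float = 1.0) -> float:
    return amount / (1 + k * delay)

# Small immediate reward: click "Accept" and get to your feed now.
click_accept_now = discounted_value(amount=10, delay=0)    # 10.0
# Larger but distant reward: privacy preserved, payoff felt much later.
configure_privacy = discounted_value(amount=50, delay=30)  # ~1.6

print(click_accept_now > configure_privacy)  # True: the nudge wins
```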

In the case of privacy settings, the basic fact is that most users do not go into the menus of Amazon, Google, or Facebook to change what is set up by default. This can only be because there is an implicit trust in the service provider on the part of the user. This trust is abrogated as a matter of course. Time and again the service providers show their true colors by making it difficult for the user to maximize privacy. The study offers the following example:

“The Facebook GDPR [General Data Protection Regulation] popup requires users to go into ‘Manage data settings’ to turn off ads based on data from third parties. If the user simply clicks ‘Accept and continue’, the setting is automatically turned on. This is not privacy by default…Facebook and Google both have default settings preselected to the least privacy friendly options.”
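The asymmetry is structural, and can be shown schematically. In the sketch below (a hypothetical consent flow with invented setting names, not Facebook’s actual code), keeping every default takes one click, while turning anything off requires knowing the setting exists and descending into a menu to flip it:

```python
# Schematic sketch of a consent flow that is "privacy by default" in
# name only. Setting names are hypothetical; the point is the
# asymmetry of effort between accepting and opting out.

DEFAULTS = {
    "ads_from_third_party_data": True,  # least privacy-friendly, preselected
    "face_recognition": True,
    "location_history": True,
}

def accept_and_continue(settings: dict) -> dict:
    """The big, prominent button: one click, every default kept."""
    return dict(settings)

def manage_data_settings(settings: dict, opt_outs: list[str]) -> dict:
    """The buried path: the user must find and flip each toggle."""
    updated = dict(settings)
    for name in opt_outs:
        updated[name] = False
    return updated

print(accept_and_continue(DEFAULTS))
print(manage_data_settings(DEFAULTS, ["ads_from_third_party_data"]))
```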

I think that I have conveyed the general idea of the problem. While this study does not reveal any techniques that were not previously known, grasping just how it all works in the macro sense is something that encounters considerable resistance on the part of the average user.

Franklin Foer, in his review of the latest Jaron Lanier book, Ten Arguments for Deleting Your Social Media Accounts Right Now, makes reference to the “Facebook manipulation machine.” The practices outlined in the Norwegian study make it clear that this manipulation machine is not limited to Facebook. It is ingrained in the protocols one encounters nearly every step of the way as one navigates the internet. As such, it functions as a giant behavior modification apparatus, acclimating the user to a new cognitive paradigm, one in which user autonomy is progressively degraded, arguably on the way to its eventual elimination. I ask the reader of this article to keep in mind that this manipulation machine is in its nascent stages. Look how far it has come in just the eight years since the term “dark patterns” was coined. What will the state of affairs be in ten, twenty, a hundred more years?