Email Like a Robot

I refer in this essay to the John Herrman article that appeared in the New York Times on Nov. 7, 2018.

All you Gmail users out there already know what I'm talking about.  Smart Reply, Smart Compose.  Smart Compose is the new one.  Say you type "Have a look…" in a message to a co-worker concerning your new widget design.  Smart Compose will offer a way to end the sentence: "…and let me know what you think", which appears in fainter type symbolic of its tentative status.  They're not imposing, just suggesting!  What could be wrong with that?  This technology is so helpful, and yet I feel like something's being taken away from me…and then guilty for not being able to appreciate this wonderful helpfulness.

The AI in Smart Compose is learning how your individual mind works.  But at the same time it collectivizes thought.  If more people use one phrasing than another, the more common one comes up first as the pertinent suggestion.  This is especially efficacious in situations where learned performance trumps expression of will, as in work communications.  Smart Compose is best at exploiting ways we've already been programmed by work, social convention, and the tools we use to communicate in the modern world.  The question of responsibility arises: are you responsible for text that is computer-generated automatically in your name?  One can call this human self-automation.  What fun!
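To make the collectivizing mechanism concrete, here is a minimal sketch in Python of the frequency-ranking idea.  The toy corpus and the suggest function are my own illustrative inventions; the real Smart Compose uses large neural language models, but the principle that the phrasing used by more people surfaces first is the same.

```python
from collections import Counter

# A toy "corpus" of sentence endings harvested from many users' mail.
# Illustrative data only -- the real system learns from vastly more text.
corpus = [
    "have a look and let me know what you think",
    "have a look and let me know what you think",
    "have a look and tell me if this works",
    "have a look when you get a chance",
    "have a look and let me know what you think",
]

def suggest(prefix: str, corpus: list[str]) -> list[tuple[str, int]]:
    """Rank candidate completions of `prefix` by how often the crowd used them."""
    completions = Counter(
        line[len(prefix):].strip()
        for line in corpus
        if line.startswith(prefix)
    )
    # The most common phrasing surfaces first: thought, collectivized.
    return completions.most_common()

print(suggest("have a look", corpus))
# [('and let me know what you think', 3), ('and tell me if this works', 1),
#  ('when you get a chance', 1)]
```

Run it and the crowd's favorite ending, "and let me know what you think", comes back on top.  Which is the whole point: the suggestion box is a running referendum on how everyone else writes.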

Email thus becomes the backbone of a mosaic of components in which a general routinization of communicative strategies reaches for a predominant position in human thought.

This reminds me of Gustave Flaubert's Dictionary of Received Ideas.  This little compendium of terms, and of how people think about them, was the inspiration for Flaubert's novel Bouvard and Pécuchet (1880), predating it by as much as thirty years; he conceived of it as illustrating "the all-pervasive power of stupidity", as A.J. Krailsheimer puts it in his introduction to the Penguin Classics edition.  Example entries: AMBITION, "Always 'insane' unless it is 'noble'".  ARISTOCRACY, "Despise and envy it."  BASES (OF SOCIETY), "Id est property, the family, religion, respect for authority.  Show anger if these are attacked."  BLACK, "Always followed by 'as ebony' or preceded by 'jet'."  C through Z is just as disheartening.  Routinized thought is as old as the hills.  But we are giving it a new dimension.


City of Surveillance

This is the title of an October 23, 2018 article in The Guardian.  It refers to Toronto, Ontario, which is now negotiating with Sidewalk Labs, Google's sister company, to implement the latest iteration of the "smart city".  Toronto's waterfront neighborhood Quayside is the guinea pig.  As Gabrielle Canon, author of the Guardian article, states, "Despite Justin Trudeau's exclamation that, through a partnership with Google's sister company Sidewalk Labs, the waterfront neighborhood could help turn the area into a 'thriving hub for innovation', questions immediately arose over how the new wired town would collect and protect data.  A year into the project, those questions have resurfaced following the resignation of a privacy expert, Dr. Ann Cavoukian, who claimed she left her consulting role on the initiative to 'send a strong statement' about the data privacy issues the project still faces."

Canon writes a bit further on in the article that "[a]fter initially being told that the data collected would be wiped and unidentifiable, Cavoukian told reporters she learned during a meeting last week that third parties could access identifiable information gathered in the district."  For Cavoukian, this crossed the line.  She told Global News, "When I heard that, I said: 'I'm sorry.  I can't support this.  I have to resign because you committed to embedding privacy by design into every aspect of your operation'" [italics mine–dw].

Canon continues by citing other worrying examples of the shell game being played by Big Tech.  Saadia Muzaffar, founder of TechGirlsCanada, stepped down from the project's digital strategy advisory panel, announcing that the initiative was not adequately addressing privacy issues.

The response by the representatives of Big Tech was predictable.  Alyssa Harvey Dawson, Sidewalk Labs' head of data governance, asserted that the Quayside project would "set a new model for responsible data use in cities.  The launch of Sidewalk Toronto sparked an active and healthy public discussion about data privacy, ownership, and governance."  In a summary of the latest draft of the proposal, she recommends that the data collected in the city be controlled and held by an independent civic data trust, and that "all entities proposing to collect or use urban data (including Sidewalk Labs) will have to file a Responsible Data Impact Assessment with the Data Trust that is publicly available and reviewable."  But is this enough to ensure that privacy by design is embedded into every aspect of the system?  Jathan Sadowski, a lecturer on the ethics of technology, does not seem convinced.  He cautions that "[b]uilding the smart urban future cannot also mean paving the way for tech billionaires to fulfill their dreams of ruling over cities.  If it does, that's not a future we should want to live in."

In response to critics' concerns, Sidewalk Labs has issued a statement: "At yesterday's meeting of Waterfront Toronto's Digital Strategy Advisory Panel, it became clear that Sidewalk Labs would play a more limited role in near-term discussions about a data governance framework at Quayside.  Sidewalk Labs has committed to implement, as a company, the principles of Privacy by Design.  Though that question is settled, the question of whether other companies involved in the Quayside project would be required to do so is unlikely to be worked out soon and may be out of Sidewalk Labs' hands.  For these reasons and others, Dr. Cavoukian has decided that it does not make sense to continue working as a paid consultant for Sidewalk Labs.  Sidewalk Labs benefited greatly from her advice, which helped the company formulate the strict privacy policies it has adopted, and looks forward to calling on her from time to time for her advice and feedback."

I have quoted extensively from this article because I believe that this progression of events is likely going to constitute a paradigm for the advance of repressive technologies all over the world.  Initiatives will typically begin with a professed strong commitment to privacy.  Then little by little it will become clear that even though the original consortium may remain committed to these principles, it will not be able to guarantee that third parties, which always insert themselves somewhere along the line, will exercise that same commitment.

For her part, perhaps Dr. Cavoukian will now rethink her professed position on the general principles of Privacy by Design.  She avers that privacy and the interests of large-scale business should not be viewed as zero-sum, insisting that we can have strict privacy rules in place concomitant with an environment in which large-scale business proceeds without serious impediment.  The paradigm I have outlined here suggests otherwise.

See how it gathers, the technological storm, which is now upon us…

New Era Begins Wednesday

Freak-out at 2:18pm.  You can't opt out.  It sets off a loud sound.  The first nationwide cellphone alert will occur at 2:18pm Eastern, tomorrow, Oct. 3, 2018.  This is only a test.  Had this been an actual emergency, scary-looking men would commandeer your device, appear onscreen, and scare the bejeezus out of you.  The intent is to use these alerts "only for advance warning of national crises", according to CBS News today.  "This is something that should not be used for a political agenda", Jeh Johnson, former Secretary of Homeland Security, warned.  They have the laws and protocols in place to avoid this, Johnson said.  "Should not be used"?  The weakness of this assurance from a former high official in the United States Government is staggering.  And indeed, the reader may remember at this juncture what happened in Hawaii this past January, when over a million people, virtually the entire population of Hawaii, received a seemingly official alert on their phones that nuclear missiles were heading for them.  A member of the faculty of Columbia University, Andy Whitehouse, had the temerity to raise a couple of issues.  "The fact that you can't turn this off, that it will be something that will arrive on your phone whether you like it or not, I think was perhaps upsetting and concerning to some people," he is quoted as saying.  I ask the reader to think about what these devices, and the technological society as a whole, are shaping up to be.  More and more, elective uses are going to be supplemented by nonelective ones.  The seeds are being sown in our grand social engineering experiment.

“We’re Just Cold People”

Small Fry, the new memoir by Steve Jobs' daughter Lisa Brennan-Jobs, will be released to the general public on Sept. 4.  Here the last vestiges of the thin veneer of counter-cultural dissension from what used to be called "straight" culture are stripped away.  Excerpts have been leaked to Vanity Fair and other outlets.  The buzz now is about that already famous line, uttered by Lisa's stepmother, Laurene Powell-Jobs, at a therapy session attended by Steve, Laurene, and the teenaged Lisa.  Ms. Brennan-Jobs cried at one point and told the therapist she felt lonely and had wanted her parents to say good night to her.  Ms. Powell-Jobs responded to the therapist: "We're just cold people."

I want to link this chilling tale with some words and ideas written by Joseph Weizenbaum, whose seminal work Computer Power and Human Reason sounded the alarm concerning a new sociopolitical dynamic way back in 1976.  In those days there was a good deal of resistance and skepticism about the burgeoning computer technology on the part of the general public.  But it was only a surface phenomenon, as the following account shows.  It tells the tale of the creation of ELIZA, a computer program devised by Mr. Weizenbaum that mimicked the response characteristics of a Rogerian psychotherapist, as I have related in these pages before.  In the article "Silicon Valley is not Your Friend", Noam Cohen traces the early development of this trend:

“Interactions between people and their computers were always going to be confusing, and that confusion would be easy for programmers to exploit. John McCarthy, the computer-science pioneer who nurtured the first hackers at M.I.T. and later ran Stanford's artificial intelligence lab, worried that programmers didn't understand their responsibilities. 'Computers will end up with the psychology that is convenient to their designers (and they'll be fascist bastards if those designers don't think twice),' he wrote in 1983. 'Program designers have a tendency to think of the users as idiots who need to be controlled. They should rather think of their program as a servant, whose master, the user, should be able to control it.'

“Call it the Eliza problem. In 1966, Joseph Weizenbaum, a professor at M.I.T., unveiled a computer program, Eliza, which imitated a psychotherapist. It would, by rote, inquire about your feelings toward your parents or try to get you talking by rephrasing what you said in the form of a question. The program immediately touched a nerve, becoming a national phenomenon, to the surprise of Mr. Weizenbaum. For example, The New York Times swooned: 'Computer is Being Taught to Understand English'. Eliza understood nothing, in truth, and could never reach any shared insight with a 'patient.' Eliza mechanically responded to whatever appeared on the screen. A typical therapy session quickly devolved into a Monty Python sketch. (Patient: You are not very aggressive, but I think you don't want me to notice that. Eliza: What makes you think I am not very aggressive? Patient: You don't argue with me. Eliza: Why do you think I don't argue with you? Patient: You are afraid of me. Eliza: Does it please you to believe I am afraid of you?)

“Imagine Mr. Weizenbaum's surprise when his secretary looked up from her computer and interrupted her exchanges with Eliza to say to him, 'Would you mind leaving the room, please?' She wanted privacy for a conversation with a machine! Mr. Weizenbaum, appalled, suddenly saw the potential for mischief by programmers who could manipulate computers and potentially the rest of us. He soon switched gears and devoted his remaining years to protesting what he considered the amorality of his computer science peers, frequently referring to his experiences as a young refugee from Nazi Germany.”
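For readers who want to see just how little was behind the curtain, here is a minimal Python sketch of the Rogerian reflection trick described above: keyword patterns plus mechanical pronoun swapping.  It is my own illustration of the technique, not Weizenbaum's program, which was written in MAD-SLIP and driven by a far larger script of rules.

```python
import re

# Swap first- and second-person words so the user's phrase can be
# thrown back as a question ("you are afraid of me" -> "I am afraid of you").
REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "you": "I", "your": "my", "am": "are", "are": "am"}

# Each rule pairs a keyword pattern with a canned question template.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {}?"),
    (re.compile(r"you are (.*)", re.I), "What makes you think I am {}?"),
    (re.compile(r"you don't (.*)", re.I), "Why do you think I don't {}?"),
]

def reflect(fragment: str) -> str:
    """Mechanically swap pronouns; no understanding involved."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def eliza(utterance: str) -> str:
    """Return the first matching canned question, or a stock prompt."""
    text = utterance.strip().rstrip(".!?")
    for pattern, template in RULES:
        match = pattern.match(text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."

print(eliza("You are afraid of me."))     # What makes you think I am afraid of you?
print(eliza("You don't argue with me."))  # Why do you think I don't argue with you?
```

Note that nothing here models meaning at all: the program cannot "argue" or be "afraid"; it can only flip pronouns and wrap your own words in a question, which was precisely Weizenbaum's point.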

One thought from the above passage stands out in a blinding flash of sickening realization: computers will end up with the psychological characteristics of their designers.

Returning to the article by Noam Cohen:  "Neither Mr. Weizenbaum nor Mr. McCarthy mentioned, though it was hard to miss, that this ascendant generation were nearly all white men with a strong preference for people just like themselves. In a word, they were incorrigible, accustomed to total control of what appeared on their screens. 'No playwright, no stage director, no emperor, however powerful,' Mr. Weizenbaum wrote, 'has ever exercised such absolute authority to arrange a stage or a field of battle and to command such unswervingly dutiful actors or troops.'"

How does such a mindset gain a foothold in the collective consciousness of society?  I believe it is instructive to point once again to Herbert Marcuse's "Some Implications of Modern Technology" of 1941, which traces the course of a world-view that, developing wholly from within established rational frameworks, made it possible to pervert the rationalistically configured presuppositions animating the liberal autonomous individual into something approaching its opposite.  I reproduce the relevant passage from this essay below:

“The principle of competitive efficiency favors the enterprises with the most highly mechanized and rationalized industrial equipment. Technological power tends to the concentration of economic power, to 'large units of production, of vast corporate enterprises producing large quantities and often a striking variety of goods, of industrial empires owning and controlling materials, equipment, and processes from the extraction of raw materials to the distribution of finished products, of dominance over an entire industry by a small number of giant concerns. . . .' And technology 'steadily increases the power at the command of giant concerns by creating new tools, processes and products.' Efficiency here called for integral unification and simplification, for the removal of all 'waste,' the avoidance of all detours, it called for radical coordination. A contradiction exists, however, between the profit incentive that keeps the apparatus moving and the rise of the standard of living which this same apparatus has made possible. 'Since control of production is in the hands of enterprisers working for profit, they will have at their disposal whatever emerges as surplus after rent, interest, labor, and other costs are met. These costs will be kept at the lowest possible minimum as a matter of course.' Under these circumstances, profitable employment of the apparatus dictates to a great extent the quantity, form and kind of commodities to be produced, and through this mode of production and distribution, the technological power of the apparatus affects the entire rationality of those whom it serves. Under the impact of this apparatus individualistic rationality has been transformed into technological rationality. It is by no means confined to the subjects and objects of large scale enterprises but characterizes the pervasive mode of thought and even the manifold forms of protest and rebellion [italics mine-dw]. This rationality establishes standards of judgment and fosters attitudes which make men ready to accept and even to introcept the dictates of the apparatus.”

To my mind, the last phrase in the last sentence quoted above is key: "attitudes which make men ready to accept and even to introcept the dictates of the apparatus".  Coupled with the idea that computers will end up with the characteristics that are convenient to their designers, designers who have thoroughly introcepted what might be called scientistic presuppositions, ones which all but eliminate affect and privilege cold rationality, this can only have an impact that replicates these features in the general populace.

We are led down a pathway in this examination that leads to many others.  One of them, implied in the above quote, is that traced by Jacques Ellul, whose The Technological Society of 1954 undertakes a deep examination of this drive for efficiency in the context of technological development.  Efficiency is expressed in optimally rationalized action, and the implications for a mindset which privileges, nay reifies, efficiency over all other concerns are profound.  Computers will end up with the characteristics of their designers, and from there these characteristics are passed on to the end-user on a scale unseen in human history.  A key point in Ellul's critique is that this "technological imperative" has been allowed to achieve full autonomy: all other concerns are subsumed into it, ends are subordinated to means, and the process is fully independent of any specific form of social organization, contrary to what Marcuse and most leftist critics of the technocracy would have us believe.  Technique, in this reading, follows a protocol that proceeds without regard to context: "The primary aspect of autonomy is perfectly expressed by Frederick Winslow Taylor, a leading technician. He takes, as his point of departure, the view that the industrial plant is a whole in itself, a 'closed organism', an end in itself…the complete separation of the goal from the mechanism, the limitation of the problem to the means, and the refusal to interfere in any way with efficiency, all this is clearly expressed by Taylor and lies at the basis of technical autonomy."

Considering the totalizing nature of this phenomenon, it is easy to see how the cohort which invests in it could grow almost without limit, given the rise of the new type of personality so memorably voiced by Ms. Laurene Powell-Jobs in the quote cited above.  If one accepts the basic argument Marcuse puts forward in “Some Implications of Modern Technology”, especially that this technological rationality “make[s] men ready to accept and even to introcept the dictates of the apparatus”, one begins to grasp the enormity of the problem.   Ellul and Marcuse convincingly demonstrate that this technological rationality operates at the level of the deeply held philosophical conviction.  This is nothing short of an ideological imperative, an ethos of living, which dovetails into the mechanistic processes of “persuasive design” to encourage the rise of “the cold people” and marginalize the rest of us.

Deception by Design

I will be referring here to a study undertaken by the Norwegian Consumer Council, or Forbrukerrådet.  We are speaking here of the subtle and not-so-subtle techniques that the big gatekeepers (Google, Microsoft, Facebook) employ to "nudge" users into giving up more privacy than they otherwise might.  What I really want to point the reader towards, however, are the wider implications of this engineering, beyond privacy questions strictly speaking, into the Marcusean realms of compliant efficiency and technological rationality, which I have discussed in some depth in previous essays.  The study is titled "Deceived by Design", with the subheading "How tech companies use dark patterns to discourage us from exercising our rights to privacy".  The neologism "dark patterns", a term only eight years old, can be defined as user interfaces that have been carefully crafted to trick users into doing things not necessarily in their best interest.  This employment of deception stems from the monetization of data.  From the report:

“Because many digital service providers make their money from the accumulation of data, they have a strong incentive to make users share as much information as possible. On the other hand, users may not have knowledge about how their personal data is used, and what consequences this data collection could have over time. If a service provider wants to collect as much personal data as possible, and the user cannot see the consequences of this data collection, there is an information asymmetry, in which the service provider holds considerable power over the user.”

One of the most salient techniques is “nudging”.  Again, from the report:

“The concept of nudging comes from the fields of behavioural economy and psychology, and describes how users can be led toward making certain choices by appealing to psychological biases. Rather than making decisions based on rationality, individuals have a tendency to be influenced by a variety of cognitive biases, often without being aware of it. For example, individuals have a tendency to choose smaller short-term rewards, rather than larger long-term gains (hyperbolic discounting), and prefer choices and information that confirm our pre-existing beliefs (confirmation bias). Interface designers who are aware of these biases can use this knowledge to effectively nudge users into making particular choices. In digital services, design of user interfaces is in many ways even more important than the words used. The psychology behind nudging can also be used exploitatively, to direct the user to actions that benefit the service provider, but which may not be in the user's interests. This can be done in various ways, such as by obscuring the full price of a product, using confusing language, or by switching the placement of certain functions contrary to user expectations…[T]he information asymmetry in many digital services becomes particularly large because most users cannot accurately ascertain the risks of exposing their privacy.”
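The hyperbolic discounting the report mentions has a standard one-parameter form, which I give here for concreteness (the numbers are purely illustrative):

V = A / (1 + kD)

where V is the present value of a reward of size A delivered after a delay D, and k measures impatience.  With k = 0.1 per day, a benefit worth 100 arriving in 30 days is valued at 100 / (1 + 3) = 25 today, so an immediate reward of 30 beats it.  The one-click "Accept and continue" button trades on exactly this arithmetic: instant convenience now, diffuse privacy cost later.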

In the case of privacy settings, the basic fact is that most users do not go into the menus of Amazon, Google, or Facebook to change what is set up by default.  This can only be because there is an implicit trust in the service provider on the part of the user.  That trust is betrayed as a matter of course.  Time and again the service providers show their true colors by making it difficult for the user to maximize privacy.  The study offers the following example:

“The Facebook GDPR [General Data Protection Regulation] popup requires users to go into 'Manage data settings' to turn off ads based on data from third parties. If the user simply clicks 'Accept and continue', the setting is automatically turned on. This is not privacy by default…Facebook and Google both have default settings preselected to the least privacy friendly options.”
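To see the structural trick in miniature, here is a sketch in Python.  The setting names and the two functions are hypothetical, my own illustration rather than any company's actual code, but they capture the asymmetry the report describes: one click preserves the data-hungry defaults, while protecting yourself requires finding and flipping every switch.

```python
# Illustrative sketch of a "dark pattern" consent flow -- hypothetical
# settings and names, not any company's actual implementation.

DEFAULTS = {
    "ads_from_third_party_data": True,   # least privacy-friendly, preselected
    "face_recognition": True,
    "location_history": True,
}

def accept_and_continue(settings: dict) -> dict:
    """The big blue button: one click, defaults kept, data flows."""
    return dict(settings)

def manage_data_settings(settings: dict, choices: dict) -> dict:
    """The buried path: the user must find, open, and toggle each item."""
    updated = dict(settings)
    updated.update(choices)
    return updated

one_click = accept_and_continue(DEFAULTS)
diligent = manage_data_settings(DEFAULTS, {
    "ads_from_third_party_data": False,
    "face_recognition": False,
    "location_history": False,
})
print(one_click)   # everything on, zero effort
print(diligent)    # everything off, after three extra decisions
```

"Privacy by default" would mean shipping DEFAULTS with everything set to False; what ships instead is privacy by effort.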

I think I have conveyed the general idea of the problem.  While this study does not reveal any techniques that were not previously known, it shows how they all work together at the macro level, something the average user puts up considerable resistance to grasping.

Franklin Foer, in his review of the latest Jaron Lanier book, Ten Arguments for Deleting Your Social Media Accounts Right Now, makes reference to the "Facebook manipulation machine."  The practices outlined in the Norwegian study make it clear that this manipulation machine is not limited to Facebook.  It is ingrained in the protocols one encounters nearly every step of the way as one navigates the internet.  As such, it functions as a giant behavior modification apparatus, acclimating the user to a new cognitive paradigm, one in which user autonomy is progressively degraded, arguably on the way to its eventual elimination.  I ask the reader to keep in mind that this manipulation machine is in its nascent stages.  Look how far it has come in just the eight years since the term "dark patterns" was coined.  What will the state of affairs be in ten, twenty, a hundred more years?


The #Plane Bae saga

Rosey Blair's opening of the portals of untruth, as in "the crowd is untruth", underscores once again the perils and wretchedness of ubiquitous surveillance (often carried out not by governments or faceless megacorporations but by ordinary citizens), coupled with the unwholesome tendencies toward exploitative voyeurism that so many find unproblematic.

For those of you not familiar with the story: a Twitter thread, which of course went viral, documented without permission the so-called "budding romance" between two people getting to know each other on an airplane.

One article on this subject quotes someone named Taylor Lorenz, an internet culture reporter for The Atlantic.

“That's one of the biggest problems with social media: It allows you to exploit the world and people around you to get attention for your own benefit,” Lorenz told Business Insider. “Everyone is just using each other for content. It encourages you to look at the world that way.”

For me this quote summarizes, in broad outline, the problem we are now faced with.  It is far more significant than the fact that one party got exploited one time.  One has to assume that everything one does in public can be treated this way.  The moral: look behind you in the plane cabin, in the theatre, walking down the street, to see if anyone is spying on you with a smartphone.  Don't say anything that the masses might pick up on to your detriment.  Know your enemy.  One must now contend with yet another layer of the intrusion of the herd mentality into lives this herd does not respect, a mentality which operates on the principle that 60,000 romance voyeurs can't be wrong.  It's "merely" a matter of magnification of tendencies that have been there for millennia, of course.  But now the world is Peyton Place.  Everyone is using everyone else for content, exploiting the world to get attention for their own benefit.  Do you really want to play this game?

Ten Arguments for Deleting Your Social Media Accounts Right Now

By Jaron Lanier.  Review by Franklin Foer, NYT Book Review, Sunday, July 1, 2018

From the review:

“He [Lanier] worries that our reliance on big tech companies is ruining our capacity for spirituality, by turning us into robotic extensions of their machines. The companies, he argues, have no appreciation for the ‘mystical spark inside you.’ They don’t understand the magic of human consciousness and, therefore, will recklessly destroy it.”

Yes, the "mystical spark".  It's true, the brain trusts of the big social media companies don't understand this aspect of human existence.  They are zealots in the School of Behaviorism, in which the will to objectivity has reduced the status of inner experience to that of a mere footnote.  The polemic instantiated today as the quarrel between Behaviorism and the advocates of the "mystical spark" stems from the great quarrel between science and the humanities, ultimately between science and poetry, which stretches back into ancient times.  Plato famously came down squarely against the poets and for the sciences on this crucial question.  "And the rest is history", as they say.  The Enlightenment recalled the Greek ideal, and Reason as a result became the new god: it was reified, made into a Supreme Court functioning as the court of last appeal.  One can't put reasoning at the service of the passions without a return to the triumph of irrational chaos, so the reasoning goes.  But the law of unintended consequences is always lurking to find the flaw in the given conception.  Since the time of Condorcet we have been grappling with this enormous question, and one then wishes, realizing the enormous stakes involved, to look back at the viewpoints gathered under the heading of the Sturm und Drang, or "Counter-Enlightenment", which indicts Reason as the reified master myth of a death-oriented civilization.

The following passage in Foer’s review of Lanier’s book highlights the sorry state of affairs we have arrived at in our society in our “liberal” regime of tolerance:

“Critics of the big technology companies have refrained from hectoring users to quit social media. It’s far more comfortable to slam a corporate leviathan than it is to shame your aunt or high school pals — or, for that matter, to jettison your own long list of “friends.” As our informational ecosystem has been rubbished, we have placed very little onus on the more than two billion users of Facebook and Twitter. So I’m grateful to Jaron Lanier for redistributing blame on the lumpen-user, for pressing the public to flee social media. He writes, ‘If you’re not part of the solution, there will be no solution.'”

Which calls to mind a passage in a book by Matthew B. Crawford, The World Beyond Your Head (2015):

“We abstain on principle from condemning activities that leave one compromised and degraded, because we fear that disapproval of the activity would be paternalistic toward those who engage in it.  We also abstain from affirming some substantive picture of human flourishing, for fear of imposing our values on others.  This gives us a pleasant feeling: we have succeeded in not being paternalistic or presumptuous.  The priority we give to avoiding these vices in particular is rooted in our respect for persons, understood as autonomous.  'People should be allowed to make their own decisions.'  Liberal agnosticism about the good life has some compelling historical reasons behind it.  It is a mind-set that we consciously cultivated as an antidote to the religious wars of centuries ago, when people slaughtered one another over ultimate differences.  After WWII, revulsion with totalitarian regimes of the right and left made us redouble our liberal commitment to neutrality.  But this stance is maladaptive in the context of 21st century capitalism…the everyday threats to your well-being no longer come from an ideological rival or a theological threat to the liberal secular order.  They are native to that order.”

Foer says at one point in his review that, generally speaking, critics have avoided calling out the average "lumpen-user" as complicit in Facebook et al.'s "manipulation machine".  But this is inevitable in an ideological framework organized around the principle of "vox populi est vox dei" and the liberal tolerance stemming therefrom, the basic principles of which I sketched above.  If the numbers are there, the people have spoken, and that is the final word.  But truth cannot be decided by numbers, and the democratic experiment implodes as a result of this realization.

The "lumpen-user".  I like that phrasing.  One imagines a mechanism wherein the "lumpen-user" is made progressively more "lumpen" by ever greater enmeshment in social media, television, and internet use in general.  And this is just the beginning.  At such a nodal point I always think of a scene in the movie "The President's Analyst", where the head of TPC (The Phone Company) outlines to the protagonist plans for installing transistors right inside the brain, so you can make and receive calls just by thinking.  Sherry Turkle assures me that the best minds at MIT are working hard at implementing just such a system now.  Who wants to have that little tablet in one's hand all the time when it can all be right inside you?

The overall point must be stressed: We must speak out now against participation in this soul-draining mechanism.  Of course speaking out in some inchoate way is not enough.  We must develop the sensibility that is adequate to countering this master-myth of reified Reason.  Reason has betrayed the mind; what is the way out of this labyrinth? That is my project, some of which has been outlined here beginning this past January.

If it takes "shaming" the lumpen-user into realizing the stakes, then so be it.  To my mind, though, shaming isn't necessary if one can outline the advantages of rejecting this ideologically based mechanism.  This grand initiative, bequeathal of Condorcet and Comte, now takes on its true form as the greatest enemy of the human spirit that has yet materialized.  Victims of technological rationalism unite!