The Attack That Broke the Net’s Safety Net

I comment here on the editorial of the selfsame title that appeared in The New York Times today.  But it is shot through with that modern, twenty-first-century iteration of myopia; this editorial was written, as it were, from the inside.  The editorial board of The New York Times is too close to the problem.  As such, statements in this editorial such as “Big Tech is slowly making its products safer for society” are conveyed as if there were some overarching truth to them.  Forgive me if I think such efforts are akin to rearranging the deck chairs on the Titanic.

Look at the bigger picture.  This bigger picture is alluded to in the editorial in such statements as “But the viral spread of the Christchurch shooting video shows the limits of the content moderation machine in the face of technologies that have been designed to be attention traps.”  I don’t see redesign of the attention traps on the horizon, do you?  “It must be a priority to redesign technology to respect the common good, instead of ignoring it.”  This is a mere platitude.  And then there is the perplexing downside to increased regulation:  “More moderation comes with heavy risks, of course. Decisions about the limits of free speech would shift to companies whose priorities are driven by shareholders.” The technological imperative cuts off all avenues of escape.  Either way, the repressive structures inherent in the technology and in the ideology which authorizes this technology will carry the day.

The true nature of the technological imperative is here underscored:  It follows its own logic, and it is a logic of nihilism.  All movement within the great internet machine is dead movement.  We are confronted here with the inherent nihilistic bias of the Scientific Spirit–which lacks the ability to determine value on the human scale.  For the pure knowledge drive, the underlying force propelling the scientific spirit, is intrinsically unselective.  It does not distinguish between the great and the small and is as a result incapable of providing any unifying mastery.  The only criterion it recognizes is that of certainty, to which all other considerations–value for human life included–are irrelevant.

One is reminded at this juncture of Nietzsche’s famous opening to the Genealogy of Morals.  “We knowers are unknown to ourselves, and for a good reason:  how can we ever hope to find what we have never looked for?  There is a sound adage that runs:  ‘Where a man’s treasure lies, there lies his heart.’  Our treasure lies in the beehives of our knowledge.  We are perpetually on our way thither, being by nature winged insects and honey gatherers of the mind.  The only thing that lies close to our heart is the desire to bring something home to the hive.  As for the rest of life–so-called ‘experience’– who among us is serious enough for that?  Or has time enough? When it comes to such matters, our heart is simply not in it–we don’t even lend our ear.  Rather, as a man divinely abstracted and self-absorbed into whose ears the bell has just drummed the twelve strokes of noon will suddenly awake with a start and ask himself what hour has actually struck, we sometimes rub our ears after the event and ask ourselves, astonished and at a loss, ‘What have we really experienced?’–or rather, ‘Who are we, really?’  And we recount the twelve tremulous strokes of our experience, our life, our being, but unfortunately count wrong.  The sad truth is that we remain necessarily strangers to ourselves, we don’t understand our own substance, we must mistake ourselves; the axiom, ‘Each man is farthest from himself’, will hold for us to all eternity. Of ourselves we are not ‘knowers’…”

Technology 101–Neil Postman

It’s easy to get confused about what’s happening all around us as technology advances so rapidly.  If one tries to read Heidegger’s Question Concerning Technology, for example, one may not understand everything that is contained there, with its many conceptual twists and turns and heavy peppering of neologisms. But this is not the case with this presentation, “Five Things We Need to Know About Technological Change”.

Idea One is technology as Faustian bargain.  This is easy to grasp, but as Postman avers, it’s really quite surprising how many people treat the advanced technologies as unmixed blessings.  “What will a new technology do?” becomes counterposed to “What will a new technology undo?” Postman brings our attention to the cost-benefit analysis.

Then he moves to idea number two, which is that the benefits brought about by the new technologies are unequally distributed.  Who benefits?  In the case of the computer, the obvious recipients of benefits are the large corporations.  The PC revolution of the 1970s made it seem for a time that the private individual could share in this boon, which up till that time seemed to be the sole province of large-scale industry.  The World at One’s Fingertips for Everyman and Everywoman.  But even in 1998, when Postman delivered this lecture, he realized the downsides:  “But to what extent has computer technology been an advantage to the masses of people?  To steel workers, vegetable store owners, automobile mechanics, musicians…[t]hese people have had their private matters made more accessible to powerful institutions.  They are more easily tracked and controlled; they are subjected to more examinations, and are increasingly mystified by the decisions made about them.  They are more than ever reduced to numerical objects…[t]hese people are losers in the great computer revolution.  The winners, which include among others computer companies, multinational corporations and the nation-state, will, of course, encourage the losers to be enthusiastic about computer technology.  That is the way of winners, and so in the beginning they told the losers that with personal computers the average person can balance a checkbook more neatly, keep better track of recipes, and make more logical shopping lists.  Then they told them that computers will make it possible to vote at home, shop at home, get all the entertainment they wish at home, and thus make community life unnecessary.”

The third idea is perhaps Postman’s most interesting contribution.  Embedded in every technology there is a powerful idea.  To a man with a computer, everything looks like information.  One is reminded of the old adage that information drives knowledge out of circulation.  And anyway, it’s all up in the cloud, why do I have to keep any of this stuff, trivial or profound, in my own brain?  “Every technology has a philosophy which is given expression in how the technology makes people use their minds, in which of our senses it amplifies, in which of our emotional and intellectual tendencies it disregards.”  In short, it encourages, nay, legislates, the kind of Weltanschauung which is appropriate to an authoritarian technocracy.  And this Weltanschauung is Positivism.  That which cannot be measured and observed and replicated experimentally does not count as knowledge, or even as valuable.  And what then of the inner person?

The fourth idea is that technological change is not additive, it is what Postman calls ecological.  It changes the basic fabric of the culture.  “In the year 1500, after the printing press was invented, you did not have the old Europe plus the printing press.  You had a different Europe.”  Technological innovation pays no heed to its potential impact on the culture it is introduced into.  This idea is also found in technological skeptics such as Jacques Ellul.  Politics does not drive technology, culture does not drive technology, but the other way around.  It is a tsunami which orders whatever it can encompass to its specifications.  This includes the human heart, if it be docile enough to fail to find the will to preserve itself.  “Who, we may ask, has had the greatest impact on American education in this century?  If you are thinking of John Dewey or any other education philosopher, I must say you are quite wrong.  The greatest impact has been made by quiet men in grey suits in a suburb of New York City called Princeton, New Jersey.  There, they developed and promoted the technology known as the standardized test, such as IQ tests, the SATs and the GREs.  Their tests redefined what we mean by learning, and have resulted in our reorganizing the curriculum to accommodate the tests.”

We come to the fifth idea, that at a certain point in their penetration into the collective psyche, successful technologies become mythic.  As a myth, a technology becomes enmeshed in the basic order of things, from the point of view of the user.  Postman cites an example from his pedagogical experience.  He asked his students if they knew when the alphabet was invented.  The question “astonished them.  It was as if I asked them when clouds and trees were invented.”  Postman died before the advent of the so-called smartphone.  But he anticipated the attitude people have about this technology.  Not so much that people can’t imagine it was ever absent, but that it has been around just long enough to seem an enduring presence, as if it had been here for a hundred years.  People seem to forget that it has only been 12 short years since the introduction of the smartphone into Western society.  “What I am saying is that our enthusiasm can turn into a form of idolatry and our belief in its beneficence can be a false absolute.”  Witness the widespread disbelief in these last years at the revelations of Edward Snowden and the sudden dawning on the psyche of the somnambulist that “Silicon Valley is Not Your Friend”, as even many principal players in the technological revolution, Tristan Harris and his confreres, began to warn of the excesses inherent in the smartphone and the internet.  But this state of affairs was apparent 80 years ago with the introduction of television, if not before.  I have written in this blog of Samuel Butler’s Erewhon, published in 1872, in which he warned of the rise of the potentially conscious machine.  One hundred and forty-seven years ago.

Then Postman sums up his short talk with a clarity that is apparent throughout his presentation.  First idea:  “The greater the technology, the greater the price.” Second idea:  “There are always winners and losers.”  Third idea: “There is embedded in every great technology an epistemological, political or social prejudice.”  Fourth, “Technology is ecological, which means, it changes everything.”  And fifth, “Technology tends to become mythic, and therefore tends to control more of our lives than is good for us.”  He closes with that same idea that Herbert Marcuse expresses in his seminal essay “Some Social Implications of Modern Technology”, written 57 years before Postman’s talk: “We have been willing to shape our lives to fit the requirements of technology, not the requirements of culture. This is a form of stupidity.”

The Persuasion Architecture of the Attention Economy

It’s a broad focus, but one which is necessary now.  Zucked by Roger McNamee.  Just came out.  This guy was with ol’ Zuck at the beginning.  One of those gradual disillusionment stories.  Let us eliminate from consideration the prospect that Zuckerberg was overtly malevolent in crafting and implementing his designs for his “platform”.  So, what was it?  In the review of the book by Tom Bissell to be found in the NYT today, there was a telling sentence in the comments section appended this morning:  Facebook “leaves the faceless mob in charge of your once independent thinking.”  But surely this explanation is not a comprehensive one.  How did the mind give up its control to the faceless mob?  This intertwines with the story of the rise of Rationalism in the 17th century.  Then, superstition reigned under the protective umbrella of the Catholic Church.  To put forward the opinion that the earth moved around the sun, in the wrong company, could get you burned as a heretic.  So, everything had to be put to the test of rationality and logic to combat this madness.  This is individual rationality.  But in the 19th century individual rationality began to be replaced by technological rationality.  It was the dictates of the device–machines, automobiles–that reconfigured the relationship between the individual and his or her environment.  The dictates of the device took precedence over the dictates of individual conscience and logic.

Zuckerberg and his ilk, then, have managed to “soften up” the mind of everyman and everywoman through the relentless implementation of the Fogg Behavior Model and related techniques, with social consciousness at its heart.  Everyone wants to belong.  The costs of this belonging don’t seem to be given their proper weight by most people.  “Simply stating what options are more popular, or merely preselecting default choices, is often enough to influence decisions…”  We call this status-quo bias.  In this way social deviance is pushed into a disreputable corner.  There’s something wrong with you if you don’t identify with the masses, or so goes this grotesque reasoning.  Our democratic system authorizes this.  And after all, one isn’t making anyone do anything they don’t want to do in the first place by implementing Fogg’s Persuasive Design methodology in our internet marketplace.  People need structure.  They need routine.  The flight from self demands it.  Nowadays the smartphone provides the template for our daily routines.  It orders our life in ways we couldn’t manage without it.  You have been trained, smartphone users.  Oh, there’s no real self anyway, so why put any faith in something that doesn’t exist?  The softening up.  Look at all those nice red dots popping up every 40 seconds, it feels so good, a mini-orgasmatron waiting to stimulate me every waking moment.  Wait till they find a way to electronically integrate this with the clitoris…the faceless mob has been in charge since time immemorial, and with the rise of democracy, it became the sovereign force in society.  Of course this force is wretched and stupid and can easily be manipulated by bad actors.  We have found in the last years just how easily manipulable it is.
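The mechanics of the two nudges quoted above are simple enough to sketch in a few lines.  What follows is my own toy construction, not Fogg’s methodology or any platform’s actual code; it merely shows how popularity labeling and default preselection combine when options are presented to a user:

```python
def present_choices(options, votes):
    """Order options by popularity and preselect the most popular one.

    A toy model of status-quo-bias nudging: the interface does not force
    a choice, it merely ranks, labels, and pre-checks one -- and inertia
    does the rest.
    """
    ranked = sorted(options, key=lambda o: votes[o], reverse=True)
    return {
        "order": ranked,                 # "most popular" is shown first
        "default": ranked[0],            # preselected default choice
        "labels": {o: f"{votes[o]} people chose this" for o in ranked},
    }


# Hypothetical privacy-settings screen: the crowd's choice becomes yours.
choices = present_choices(
    ["opt out of tracking", "share everything"],
    {"opt out of tracking": 120, "share everything": 900},
)
print(choices["default"])  # prints "share everything"
```

Nothing here compels anyone; the preselected box simply stays checked unless the user expends effort to uncheck it, which is the whole point.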

Technology and the Law of Unintended Consequences

I consider today an article that appeared in the New York Times on Jan 25.  One whole day ago.  It is by Cal Newport, “a computer scientist and author”.  He claims that “Steve Jobs would not approve” of the way we use the iPhone in 2019, contrasting the typical user of this moment in time with that of the user of 12 years ago.  This, after all, was the purple dawn of the smartphone in the West.  (I guess they had them in Korea and Japan a few years before that.)  This is not a coincidence, as it’s apparent to anyone who looks at the laws of social dynamics that such a device is a perfect fit for the thoroughly collectivist cultures in the Far East.

Newport claims that Jobs never saw what was coming.  Newport relates that in a speech Jobs gave introducing the iPhone, he characterized it as “the best iPod we’ve ever made.”  Then Newport goes on to say that “he doesn’t dedicate any significant time to discussing the phone’s internet connectivity features until more than 30 minutes into the address.”  Oh, that proves it.  Because he delayed speaking of the internet connectivity aspect of the iPhone until the metaphorical page 12 of Section One of the New York Times daily (“buried in the back pages”), that must mean that he had no idea that internet connectivity would be a significant aspect of the iPhone’s operation.  Could it not be instead that he wished to downplay this aspect of its performance?

But this is all from a man who could safely be said to occupy a place on the “Naturwissenschaft-Geisteswissenschaft spectrum” (my term) that favors the former and disfavors the latter.  For those unfamiliar with this concept, it refers to the age-old quarrel between the traditional spirit of science and that of poetry.  Wilhelm Dilthey recharacterized it in the 19th century as the quarrel between science and the humanities in general.  Geisteswissenschaft, the humanities; Naturwissenschaft, hard science.  In undertaking a course of higher education, one typically chooses one path or the other.  It’s possible for one individual to take hard science courses and English literature courses, and many do.  But the predominant trajectory will involve one category or the other.  How many humanities courses did Mark Zuckerberg take?  Of course it all comes down to money.  Where is the money in majoring in English literature?  And then ask the question, how much money has B. J. Fogg made in the last 12 years?  (See the link to the Stanford Persuasive Technology Lab on this site for more information on B. J. Fogg and his Behavior Model.)

So Newport comes at the question from the perspective of Naturwissenschaft, certainly.  I argue that this creates a blind spot concerning the whole question of technology, masking off the wider implications of the trajectory of the phenomenon in question.  Of course this also works in the other direction.  Critique must be unsparing as regards the whole Naturwissenschaft-Geisteswissenschaft question.  But we are here concerned with the spectre of rampant technology.  Even Newport concedes that the average person is no longer the master of this technology but that the reverse has occurred.  It seems to me that the only effective antidote would be to attempt to redress a balance so far in favor of Naturwissenschaft that the opposite tendency is all but ignored.

My overall reaction to this article by Mr. Newport is one of incredulity:  How could he have expected the smartphone to develop in any way other than the way it did?  “Mr. Jobs seemed to understand the iPhone as something that would help us with a small number of activities–listening to music, placing calls, generating directions.  He didn’t seek to radically change the rhythm of users’ daily lives…[p]ractically speaking, to be a minimalist smartphone user means that you deploy this device for a small number of features that do things you value (and that the phone does particularly well), and then outside of these activities, put it away.”  The ludicrously unrealistic tenor of these remarks beggars belief.  Basic concepts in psychology illustrate that this is pie-in-the-sky thinking about the basic tendencies of a psychic organism that is constantly being molded into less and less autonomous configurations.  We are speaking here ultimately about compliant efficiency as a religious phenomenon.  The dictates of the operation of the apparatus determine our morality.  We “should” respond to the push notification immediately because the device makes it possible by redefining reality.  The red dot on the incoming email means urgency.  One can ignore it.  One can turn it to greyscale.  But the dictates of the apparatus insist that it is otherwise.

Newport goes on to make pathetically weak recommendations to combat the smartphone’s march to ubiquity in the psyche of the captured user.  “[I]f your work doesn’t absolutely demand that you be accessible by email when away from your desk, delete the Gmail app or disconnect the built-in email client from your office servers.  It’s occasionally convenient to check in when out and about, but this occasional convenience almost always comes at the cost of developing a compulsive urge to monitor your messages constantly (emphasis mine–dw)”.  Now, suddenly, Newport understands what psychic forces are in play here.  This phenomenon, call it FOMO or operant conditioning or whatever, is far stronger than most people realized before 2007 in the minds of those who are most susceptible to it (which seems to be 90% of the population).  If the desired outcome is some moderate measure of autonomy in this increasingly mechanized social environment, it necessitates one solution and one solution only:  ditching the smartphone entirely.

Email Like a Robot

I reference in this essay the John Herrman article which appeared in the New York Times dated Nov. 7, 2018.

All you Gmail users out there already know what I’m talking about.  Smart Reply, Smart Compose.  Smart Compose is the new one.  Say you type “Have a look…” to one of your co-workers concerning your new widget design.  Smart Compose will offer a way to end the sentence:  “…and let me know what you think”, which appears in fainter type, symbolic of its tentative status.  They’re not imposing, just suggesting!  What could be wrong with that?  This technology is so helpful, and yet I feel like something’s being taken away from me…and then guilty for not being able to appreciate this wonderful helpfulness.

The AI in Smart Compose is learning how your individual mind works.  But at the same time it collectivizes thought.  If more people use a certain phrasing than another, the one used more will come up first as the pertinent suggestion.  This is especially efficacious in situations where learned performance trumps expression of will, as in work communications.  Smart Compose is best at exploiting ways we’ve already been programmed, by work, social convention, and the tools we use in the modern world to communicate.  The question of responsibility arises–are you responsible for text that is computer-generated automatically in your name?  One can call this human self-automation.  What fun!
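The collectivizing principle described above can be sketched in miniature.  The following is my own toy construction, not Google’s actual Smart Compose (which relies on neural language models trained at scale); it shows only the core dynamic: the phrasing chosen by more people surfaces first for everyone.

```python
from collections import Counter, defaultdict


class PhraseSuggester:
    """Toy frequency-based sentence completion.

    Illustrates the collectivization of phrasing: each user's choice
    adds weight to a completion, and the majority's wording becomes
    the suggestion offered to all.
    """

    def __init__(self):
        # Maps a sentence prefix to a tally of observed completions.
        self.completions = defaultdict(Counter)

    def observe(self, prefix, completion):
        # Every sent message votes for the phrasing its author chose.
        self.completions[prefix][completion] += 1

    def suggest(self, prefix):
        # The most popular completion across all users surfaces first.
        ranked = self.completions[prefix].most_common()
        return ranked[0][0] if ranked else None


s = PhraseSuggester()
s.observe("Have a look", " and let me know what you think")
s.observe("Have a look", " and let me know what you think")
s.observe("Have a look", " when you get a chance")
print(s.suggest("Have a look"))  # the majority phrasing wins
```

The individual who accepts the suggestion then feeds the tally in turn, making the dominant phrasing still more dominant.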

Email thus becomes the backbone of a mosaic of components in which a general routinization of communicative strategies reaches for a predominant position in human thought.

This reminds me of Gustave Flaubert’s Dictionary of Received Ideas.  This is a little compendium of terms and the received opinions attached to them, which Flaubert conceived of as illustrating “the all-pervasive power of stupidity”, as A. J. Krailsheimer puts it in his Introduction to the Penguin Classics edition.  It was the inspiration for Flaubert’s novel Bouvard and Pécuchet (1880), predating it by as much as thirty years.  Example entries:  AMBITION, “Always ‘insane’ unless it is ‘noble’”.  ARISTOCRACY, “Despise and envy it.”  BASES (OF SOCIETY), “Id est property, the family, religion, respect for authority.  Show anger if these are attacked.”  BLACK, “Always followed by ‘as ebony’ or preceded by ‘jet.’”  C through Z is just as disheartening.  Routinized thought is as old as the hills.  But we are giving it a new dimension.

City of Surveillance

This is the title of an October 23, 2018 article to be found in The Guardian UK.  It refers to Toronto, Ontario, which is now negotiating with Sidewalk Labs, Google’s sister company, to implement the latest iteration of the “smart city”.  Toronto’s waterfront neighborhood Quayside is the guinea pig.  As Gabrielle Canon, author of the Guardian article, states, “Despite Justin Trudeau’s exclamation that, through a partnership with Google’s sister company Sidewalk Labs, the waterfront neighborhood could help turn the area into a ‘thriving hub for innovation’, questions immediately arose over how the new wired town would collect and protect data.  A year into the project, those questions have resurfaced following the resignation of a privacy expert, Dr. Ann Cavoukian, who claimed she left her consulting role on the initiative to ‘send a strong statement’ about the data privacy issues the project still faces.”

Canon writes a bit further on in the article that “[a]fter initially being told that the data collected would be wiped and unidentifiable, Cavoukian told reporters she learned during a meeting last week that third parties could access identifiable information gathered in the district.”  For Cavoukian, this crossed the line.  She told the Global News, “When I heard that, I said: ‘I’m sorry.  I can’t support this.  I have to resign because you committed to embedding privacy by design into every aspect of your operation’” [italics mine–dw].

Canon continues by citing other worrying examples of the shell game being utilized by Big Tech.  Saadia Muzaffar of TechGirlsCanada stepped down from the project’s digital strategy advisory panel, announcing that privacy concerns were not being adequately addressed.

The response by the representatives of Big Tech was predictable.  Alyssa Harvey Dawson, Sidewalk Labs’ head of data governance, asserted that the Quayside project would “set a new model for responsible data use in cities.  The launch of Sidewalk Toronto sparked an active and healthy public discussion about data privacy, ownership, and governance.”  In a summary of the latest draft of the proposal, she recommends that the data collected in the city should be controlled and held by an independent civic data trust and that “all entities proposing to collect or use urban data (including Sidewalk Labs) will have to file a Responsible Data Impact Assessment with the Data Trust that is publicly available and reviewable.”  But is this enough to ensure that privacy by design is embedded into every aspect of the system?  Jathan Sadowski, a lecturer on the ethics of technology, does not seem to be convinced of this.  He cautions that “[b]uilding the smart urban future cannot also mean paving the way for tech billionaires to fulfill their dreams of ruling over cities.  If it does, that’s not a future we should want to live in.”  In response to concerns of critics, Sidewalk Labs has issued a statement, saying “At yesterday’s meeting of Waterfront Toronto’s Digital Strategy Advisory Panel, it became clear that Sidewalk Labs would play a more limited role in near-term discussions about a data governance framework at Quayside.  Sidewalk Labs has committed to implement, as a company, the principles of Privacy by Design.  Though that question is settled, the question of whether other companies involved in the Quayside project would be required to do so is unlikely to be worked out soon and may be out of Sidewalk Labs’ hands.  For these reasons and others, Dr. Cavoukian has decided that it does not make sense to continue working as a paid consultant for Sidewalk Labs.
Sidewalk Labs benefited greatly from her advice, which helped the company formulate the strict privacy policies it has adopted, and looks forward to calling on her from time to time for her advice and feedback.”

I have quoted extensively from this article because I believe that this progression of events is likely going to constitute a paradigm for the advance of repressive technologies all over the world.  Initiatives will typically begin with a professed strong commitment to privacy.  Then little by little it will become clear that even though the original consortium may remain committed to these principles, it will not be able to guarantee that third parties, which always insert themselves somewhere along the line, will exercise that same commitment.

For her part, perhaps Dr. Cavoukian will now rethink the position she professes about the general principles of Privacy by Design.  She avers that privacy and the interests of large-scale business should not be viewed as zero sum, insisting that we can have strict privacy rules in place concomitant with an environment in which large-scale business practice proceeds without serious impediment.  The paradigm I have outlined here suggests otherwise.

See how it gathers, the technological storm, which is now upon us…

New Era Begins Wednesday

Freak-out at 2:18pm.  You can’t opt out.  It sets off a loud sound.  The first nationwide cellphone alert will occur at 2:18pm Eastern, tomorrow, Oct. 3, 2018.  This is only a test.  Had this been an actual emergency, scary-looking men would commandeer your device, appear onscreen and scare the bejeezus out of you.  The intent is to use these alerts “only for advance warning of national crises”, according to CBS News today.  “This is something that should not be used for a political agenda”, Jeh Johnson, former Secretary of Homeland Security, warned.  They have the laws and protocols in place to avoid this, Johnson said.  “Should not be used”?  The weakness of this assurance from a former high official in the United States Government is staggering.  And indeed, the reader may remember at this juncture what happened in Hawaii this past January, in which over a million people, virtually the entire population of Hawaii, received a seemingly official alert on their phones that nuclear missiles were heading for them.  A member of the faculty of Columbia University, Andy Whitehouse, had the temerity to raise a couple of issues.  “The fact that you can’t turn this off, that it will be something that will arrive on your phone whether you like it or not, I think was perhaps upsetting and concerning to some people,” he is quoted as saying.  I ask the reader to think about what these devices, and the technological society as a whole, are shaping up to be.  More and more, elective uses are going to be supplemented by nonelective ones.  The seeds are being sown in our grand social engineering experiment.