Technology 101–Neil Postman

It’s easy to get confused about what’s happening all around us as technology advances so rapidly.  If one tries to read Heidegger’s The Question Concerning Technology, for example, one may not understand everything contained there, with its many conceptual twists and turns and its heavy peppering of neologisms.  But this is not the case with Postman’s presentation, “Five Things We Need to Know About Technological Change”.

Idea One is technology as a Faustian bargain.  This is easy to grasp, but as Postman avers, it is surprising how many people treat advanced technologies as unmixed blessings.  “What will a new technology do?” must always be counterposed to “What will a new technology undo?”  Postman directs our attention to this cost-benefit analysis.

Then he moves to idea number two, which is that the benefits brought about by new technologies are unequally distributed.  Who benefits?  In the case of the computer, the obvious recipients of benefits are the large corporations.  The PC revolution of the 1970s made it seem for a time that the private individual could share in a boon which until then had been the sole province of large-scale industry.  The World at One’s Fingertips for Everyman and Everywoman.  But even in 1998, when Postman delivered this lecture, he recognized the downsides:  “But to what extent has computer technology been an advantage to the masses of people?  To steel workers, vegetable store owners, automobile mechanics, musicians…[t]hese people have had their private matters made more accessible to powerful institutions.  They are more easily tracked and controlled; they are subjected to more examinations, and are increasingly mystified by the decisions made about them.  They are more than ever reduced to numerical objects…[t]hese people are losers in the great computer revolution.  The winners, which include among others computer companies, multinational corporations and the nation-state, will, of course, encourage the losers to be enthusiastic about computer technology.  That is the way of winners, and so in the beginning they told the losers that with personal computers the average person can balance a checkbook more neatly, keep better track of recipes, and make more logical shopping lists.  Then they told them that computers will make it possible to vote at home, shop at home, get all the entertainment they wish at home, and thus make community life unnecessary.”

The third idea is perhaps Postman’s most interesting contribution.  Embedded in every technology is a powerful idea.  To a man with a computer, everything looks like information.  One is reminded of the old adage that information drives knowledge out of circulation.  And anyway, it’s all up in the cloud; why do I have to keep any of this stuff, trivial or profound, in my own brain?  “Every technology has a philosophy which is given expression in how the technology makes people use their minds, in which of our senses it amplifies, in which of our emotional and intellectual tendencies it disregards.”  In short, it encourages, nay, legislates, the kind of Weltanschauung appropriate to an authoritarian technocracy.  And that Weltanschauung is Positivism.  Whatever cannot be measured, observed, and replicated experimentally does not count as knowledge, or even as valuable.  And what then of the inner person?

The fourth idea is that technological change is not additive; it is what Postman calls ecological.  It changes the basic fabric of the culture.  “In the year 1500, after the printing press was invented, you did not have the old Europe plus the printing press.  You had a different Europe.”  Technological innovation pays no heed to its potential impact on the culture into which it is introduced.  This idea is also found in technological skeptics such as Jacques Ellul.  Politics does not drive technology, and culture does not drive technology; it is the other way around.  Technology is a tsunami which orders whatever it can encompass to its specifications.  This includes the human heart, if it be docile enough to fail to find the will to preserve itself.  “Who, we may ask, has had the greatest impact on American education in this century?  If you are thinking of John Dewey or any other education philosopher, I must say you are quite wrong.  The greatest impact has been made by quiet men in grey suits in a suburb of New York City called Princeton, New Jersey.  There, they developed and promoted the technology known as the standardized test, such as IQ tests, the SATs and the GREs.  Their tests redefined what we mean by learning, and have resulted in our reorganizing the curriculum to accommodate the tests.”

We come to the fifth idea: at a certain point in their penetration into the collective psyche, successful technologies become mythic.  As myth, a technology becomes enmeshed in the basic order of things, from the point of view of the user.  Postman cites an example from his pedagogical experience.  He asked his students if they knew when the alphabet was invented.  The question “astonished them.  It was as if I asked them when clouds and trees were invented.”  Postman died before the advent of the so-called smartphone.  But he anticipated the attitude people have toward this technology: not so much that they cannot believe it was ever absent, but that it has been around just long enough to seem permanent, as if it had an enduring presence stretching back a hundred years.  People seem to forget that it has only been 12 short years since the introduction of the smartphone into Western society.  “What I am saying is that our enthusiasm can turn into a form of idolatry and our belief in its beneficence can be a false absolute.”  Witness the widespread disbelief of recent years at the revelations of Edward Snowden, and the sudden dawning on the psyche of the somnambulist that “Silicon Valley is Not Your Friend”, as even principal players in the technological revolution, Tristan Harris and his confreres, began to warn of the excesses inherent in the smartphone and the internet.  But this state of affairs was apparent 80 years ago with the introduction of television, if not before.  I have written in this blog of Samuel Butler’s Erewhon, published in 1872, in which he warned of the rise of the potentially conscious machine.  One hundred and forty-seven years ago.

Then Postman sums up his short talk with the clarity that is apparent throughout his presentation.  First idea:  “The greater the technology, the greater the price.”  Second idea:  “There are always winners and losers.”  Third idea:  “There is embedded in every great technology an epistemological, political or social prejudice.”  Fourth:  “Technology is ecological, which means, it changes everything.”  And fifth:  “Technology tends to become mythic, and therefore tends to control more of our lives than is good for us.”  He closes with the same idea that Herbert Marcuse expresses in his seminal essay “Some Social Implications of Modern Technology”, written 57 years before Postman’s talk: “We have been willing to shape our lives to fit the requirements of technology, not the requirements of culture. This is a form of stupidity.”

The Persuasion Architecture of the Attention Economy

It’s a broad focus, but one which is necessary now.  Zucked by Roger McNamee.  Just came out.  This guy was with ol’ Zuck at the beginning, as an early investor and mentor.  One of those gradual disillusionment stories.  Let us eliminate from consideration the prospect that Zuckerberg was overtly malevolent in crafting and implementing his designs for his “platform”.  So, what was it?  In Tom Bissell’s review of the book in the NYT today, there was a telling sentence in the comments section appended this morning:  Facebook “leaves the faceless mob in charge of your once independent thinking.”  But surely this explanation is not a comprehensive one.  How did the mind give up its control to the faceless mob?  This intertwines with the story of the rise of Rationalism in the 17th century.  At that time, superstition reigned under the protective umbrella of the Catholic Church.  To put forward the opinion that the earth moved around the sun, in the wrong company, could get you burned as a heretic.  So everything had to be put to the test of rationality and logic to combat this madness.  This is individual rationality.  But in the 19th century individual rationality began to be replaced by technological rationality.  It was the dictates of the device, of machines and automobiles, that reconfigured the relationship between the individual and his or her environment.  The dictates of the device took precedence over the dictates of individual conscience and logic.

Zuckerberg and his ilk, then, have managed to “soften up” the mind of everyman and everywoman through the relentless implementation of the Fogg Behavior Model and related techniques, with our craving to belong at its heart.  Everyone wants to belong.  The costs of this belonging don’t seem to be given their proper weight by most people.  “Simply stating what options are more popular, or merely preselecting default choices, is often enough to influence decision…”  We call this status quo bias.  In this way social deviance is pushed into a disreputable corner.  There’s something wrong with you if you don’t identify with the masses, or so goes this grotesque reasoning.  Our democratic system authorizes this.  And after all, one isn’t making anyone do anything they don’t want to do in the first place by implementing Fogg’s Persuasive Design methodology in our internet marketplace.  People need structure.  They need routine.  The flight from self demands it.  Nowadays the smartphone provides the template for our daily routines.  It orders our lives in ways we couldn’t manage without it.  You have been trained, smartphone users.  Oh, there’s no real self anyway, so why put any faith in something that doesn’t exist?  The softening up.  Look at all those nice red dots popping up every 40 seconds; it feels so good, a mini-orgasmatron waiting to stimulate me every waking moment.  Wait till they find a way to electronically integrate this with the clitoris…  The faceless mob has been in charge since time immemorial, and with the rise of democracy it became the sovereign force in society.  Of course this force is wretched and stupid and can easily be manipulated by bad actors.  We have found in the last years just how easily manipulable it is.

Technology and the Law of Unintended Consequences

I consider today an article that appeared in the New York Times on Jan 25.  One whole day ago.  It is by Cal Newport, “a computer scientist and author”.  He claims that “Steve Jobs would not approve” of the way we use the iPhone in 2019, contrasting the typical user of this moment with the user of 12 years ago.  That moment, after all, was the purple dawn of the smartphone in the West.  (I guess they had them in Korea and Japan a few years before that.)  This is not a coincidence, as it is apparent to anyone who looks at the laws of social dynamics that such a device is a perfect fit for the thoroughly collectivist cultures of the Far East.

Newport claims that Jobs never saw what was coming.  Newport relates that in the speech Jobs gave introducing the iPhone, he characterized it as “the best iPod we’ve ever made.”  Then Newport goes on to say that “he doesn’t dedicate any significant time to discussing the phone’s internet connectivity features until more than 30 minutes into the address.”  Oh, that proves it.  Because Jobs delayed speaking of the internet connectivity aspect of the iPhone until the metaphorical page 12 of Section One of the New York Times daily (“buried in the back pages”), that must mean he had no idea that internet connectivity would become a significant aspect of the iPhone’s operation.  Could it not be instead that he wished to downplay this aspect of its performance?

But this is all from a man who could safely be said to occupy a place on the “Naturwissenschaft-Geisteswissenschaft spectrum” (my term) that favors the former and disfavors the latter.  For those unfamiliar with this concept, it refers to the age-old quarrel between the traditional spirit of science and that of poetry.  Wilhelm Dilthey recharacterized it in the 19th century as the quarrel between science and the humanities in general.  Geisteswissenschaft is the humanities; Naturwissenschaft, the hard sciences.  In undertaking a course of higher education, one typically chooses one path or the other.  It is possible for one individual to take hard science courses and English literature courses, and many do.  But the predominant trajectory will involve one category or the other.  How many humanities courses did Mark Zuckerberg take?  Of course it all comes down to money.  Where is the money in majoring in English literature?  And then ask the question: how much money has B. J. Fogg made in the last 12 years?  (See the link to the Stanford Persuasive Technology Lab on this site for more information on B. J. Fogg and his Behavior Model.)

So Newport comes at the question from the perspective of Naturwissenschaft, certainly.  I argue that this creates a blind spot concerning the whole question of technology, masking off the wider implications of the trajectory of the phenomenon in question.  Of course this also works in the other direction.  Critique must be unsparing as regards the whole Naturwissenschaft-Geisteswissenschaft question.  But we are here concerned with the spectre of rampant technology.  Even Newport concedes that the average person is no longer the master of this technology but that the reverse has occurred.  It seems to me that the only effective antidote would be to attempt to redress a balance so far tipped in favor of Naturwissenschaft that the opposite tendency is all but ignored.

My overall reaction to this article by Mr. Newport is one of incredulity:  how could he imagine that the smartphone would develop in any way other than the way it did?  “Mr. Jobs seemed to understand the iPhone as something that would help us with a small number of activities–listening to music, placing calls, generating directions.  He didn’t seek to radically change the rhythm of users’ daily lives…[p]ractically speaking, to be a minimalist smartphone user means that you deploy this device for a small number of features that do things you value (and that the phone does particularly well), and then outside of these activities, put it away.”  The ludicrously unrealistic tenor of these remarks beggars belief.  Basic concepts in psychology illustrate that this is pie-in-the-sky thinking about the basic tendencies of a psychic organism that is constantly being molded into less and less autonomous configurations.  We are speaking here ultimately about compliant efficiency as a religious phenomenon.  The dictates of the operation of the apparatus determine our morality.  We “should” respond to the push notification immediately because the device, by redefining reality, makes it possible.  The red dot on the incoming email means urgency.  One can ignore it.  One can turn it to greyscale.  But the dictates of the apparatus insist otherwise.

Newport goes on to make pathetically weak recommendations to combat the smartphone’s march to ubiquity in the psyche of the captured user.  “[I]f your work doesn’t absolutely demand that you be accessible by email when away from your desk, delete the Gmail app or disconnect the built-in email client from your office servers.  It’s occasionally convenient to check in when out and about, but this occasional convenience almost always comes at the cost of developing a compulsive urge to monitor your messages constantly (emphasis mine–dw)”.  Now, suddenly, Newport understands what psychic forces are in play here.  This phenomenon, call it FOMO or operant conditioning or whatever, is far stronger than most people realized before 2007, at least in the minds of those most susceptible to it (which seems to be 90% of the population).  It necessitates one solution and one solution only, if the desired outcome is some moderate measure of autonomy in this increasingly mechanized social environment:  ditching the smartphone entirely.

Email Like a Robot

I reference in this essay the John Herrman article which appeared in the New York Times dated Nov. 7, 2018.

All you Gmail users out there already know what I’m talking about.  Smart Reply, Smart Compose.  Smart Compose is the new one.  Say you type “Have a look…” to one of your co-workers concerning your new widget design.  Smart Compose will offer a way to end the sentence:  “…and let me know what you think”, which appears in a fainter typeface symbolic of its tentative status.  They’re not imposing, just suggesting!  What could be wrong with that?  This technology is so helpful, and yet I feel like something’s being taken away from me…and then I feel guilty for not being able to appreciate this wonderful helpfulness.

The AI in Smart Compose is learning how your individual mind works.  But at the same time it collectivizes thought.  If more people use one phrasing than another, the more common one will come up first as the pertinent suggestion.  This is especially efficacious in situations where learned performance trumps expression of will, as in work communications.  Smart Compose is best at exploiting ways we’ve already been programmed, by work, by social convention, and by the tools we use in the modern world to communicate.  The question of responsibility arises:  are you responsible for text that is computer-generated automatically in your name?  One can call this human self-automation.  What fun!
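To make the collectivizing tendency concrete, here is a minimal sketch, mine and not Google’s actual Smart Compose (which rests on neural language models trained at enormous scale); the PhraseSuggester class and its sample data are invented for the illustration.  It simply counts how everyone has finished a given opening and hands the most common ending back to whoever types that opening next.

```python
from collections import defaultdict

class PhraseSuggester:
    """Toy frequency-ranked completion: the aggregate's favorite phrasing wins."""

    def __init__(self):
        # prefix -> {completion: how many times any user finished the sentence that way}
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, prefix, completion):
        """Record how some user actually ended a sentence that began with `prefix`."""
        self.counts[prefix.lower()][completion] += 1

    def suggest(self, prefix):
        """Return the ending used most often across all users, regardless of who is typing."""
        options = self.counts.get(prefix.lower())
        if not options:
            return None
        return max(options, key=options.get)

suggester = PhraseSuggester()
suggester.observe("Have a look", " and let me know what you think.")
suggester.observe("Have a look", " and let me know what you think.")
suggester.observe("Have a look", " when you get a chance.")

# The majority phrasing claims the faint grey suggestion slot.
print(suggester.suggest("Have a look"))  # " and let me know what you think."
```

The grey text you see, in other words, is the aggregate’s favorite phrasing, not yours.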

Email thus becomes the backbone of a system of components in which a general routinization of communicative strategies reaches for a predominant position in human thought.

This reminds me of Gustave Flaubert’s Dictionary of Received Ideas.  This is a little compendium of terms, and of how people conventionally think about them, which inspired Flaubert’s novel Bouvard and Pécuchet (1880) and predated it by as much as thirty years.  He conceived of it as illustrating “the all-pervasive power of stupidity”, as A. J. Krailsheimer puts it in his Introduction to the Penguin Classics edition.  Example entries:  AMBITION:  “Always ‘insane’ unless it is ‘noble’.”  ARISTOCRACY:  “Despise and envy it.”  BASES (OF SOCIETY):  “Id est property, the family, religion, respect for authority.  Show anger if these are attacked.”  BLACK:  “Always followed by ‘as ebony’ or preceded by ‘jet.’”  C through Z is just as disheartening.  Routinized thought is as old as the hills.  But we are giving it a new dimension.


City of Surveillance

This is the title of an October 23, 2018 article to be found in The Guardian UK.  It refers to Toronto, Ontario, which is now negotiating with Sidewalk Labs, Google’s sister company, to implement the latest iteration of the “smart city”.  Toronto’s waterfront neighborhood Quayside is the guinea pig.  As Gabrielle Canon, author of the Guardian article, states, “Despite Justin Trudeau’s exclamation that, through a partnership with Google’s sister company Sidewalk Labs, the waterfront neighborhood could help turn the area into a ‘thriving hub for innovation’, questions immediately arose over how the new wired town would collect and protect data.  A year into the project, those questions have resurfaced following the resignation of a privacy expert, Dr. Ann Cavoukian, who claimed she left her consulting role on the initiative to ‘send a strong statement’ about the data privacy issues the project still faces.”

Canon writes a bit further on in the article that “[a]fter initially being told that the data collected would be wiped and unidentifiable, Cavoukian told reporters she learned during a meeting last week that third parties could access identifiable information gathered in the district.”  For Cavoukian, this crossed the line.  She told Global News, “When I heard that, I said: ‘I’m sorry.  I can’t support this.  I have to resign because you committed to embedding privacy by design into every aspect of your operation’” [italics mine–dw].

Canon continues by citing other worrying examples of the shell game being played by Big Tech.  Saadia Muzaffar of TechGirlsCanada stepped down from the project’s digital strategy advisory panel, announcing that it was not adequately addressing privacy issues.

The response by the representatives of Big Tech was predictable.  Alyssa Harvey Dawson, Sidewalk Labs’ head of data governance, asserted that the Quayside project would “set a new model for responsible data use in cities.  The launch of Sidewalk Toronto sparked an active and healthy public discussion about data privacy, ownership, and governance.”  In a summary of the latest draft of the proposal, she recommends that the data collected in the city should be controlled and held by an independent civic data trust and that “all entities proposing to collect or use urban data (including Sidewalk Labs) will have to file a Responsible Data Impact Assessment with the Data Trust that is publicly available and reviewable.”  But is this enough to ensure that privacy by design is embedded into every aspect of the system?  Jathan Sadowski, a lecturer on the ethics of technology, does not seem to be convinced.  He cautions that “[b]uilding the smart urban future cannot also mean paving the way for tech billionaires to fulfill their dreams of ruling over cities.  If it does, that’s not a future we should want to live in.”  In response to the concerns of critics, Sidewalk Labs has issued a statement, saying “At yesterday’s meeting of Waterfront Toronto’s Digital Strategy Advisory Panel, it became clear that Sidewalk Labs would play a more limited role in near-term discussions about a data governance framework at Quayside.  Sidewalk Labs has committed to implement, as a company, the principles of Privacy by Design.  Though that question is settled, the question of whether other companies involved in the Quayside project would be required to do so is unlikely to be worked out soon and may be out of Sidewalk Labs’ hands.  For these reasons and others, Dr. Cavoukian has decided that it does not make sense to continue working as a paid consultant for Sidewalk Labs.  Sidewalk Labs benefited greatly from her advice, which helped the company formulate the strict privacy policies it has adopted, and looks forward to calling on her from time to time for her advice and feedback.”

I have quoted extensively from this article because I believe that this progression of events is likely going to constitute a paradigm for the advance of repressive technologies all over the world.  Initiatives will typically begin with a professed strong commitment to privacy.  Then little by little it will become clear that even though the original consortium may remain committed to these principles, it will not be able to guarantee that third parties, which always insert themselves somewhere along the line, will exercise that same commitment.

For her part, perhaps Dr. Cavoukian will now rethink the position she professes about the general principles of Privacy by Design.  She avers that privacy and the interests of large-scale business should not be viewed as zero-sum, insisting that we can have strict privacy rules in place concomitant with an environment in which large-scale business practice proceeds without serious impediment.  The paradigm I have outlined here suggests otherwise.

See how it gathers, the technological storm, which is now upon us…

New Era Begins Wednesday

Freak-out at 2:18pm.  You can’t opt out.  It sets off a loud sound.  The first nationwide cellphone alert will occur at 2:18pm Eastern, tomorrow, Oct. 3, 2018.  This is only a test.  Had this been an actual emergency, scary-looking men would commandeer your device, appear onscreen and scare the bejeezus out of you.  The intent is to use these alerts “only for advance warning of national crises”, according to CBS News today.  “This is something that should not be used for a political agenda,” warned Jeh Johnson, former Secretary of Homeland Security.  They have the laws and protocols in place to avoid this, Johnson said.  “Should not be used”?  The weakness of this assurance from a former high official in the United States Government is staggering.  And indeed, the reader may remember at this juncture what happened in Hawaii this past January, when over a million people, virtually the entire population of Hawaii, received a seemingly official alert on their phones that nuclear missiles were heading for them.  A member of the faculty of Columbia University, Andy Whitehouse, had the temerity to raise a couple of issues.  “The fact that you can’t turn this off, that it will be something that will arrive on your phone whether you like it or not, I think was perhaps upsetting and concerning to some people,” he is quoted as saying.  I ask the reader to think about what these devices, and the technological society as a whole, are shaping up to be.  More and more, elective uses are going to be supplemented by nonelective ones.  The seeds are being sown in our grand social engineering experiment.

“We’re Just Cold People”

The new memoir by Steve Jobs’ daughter, Lisa Brennan-Jobs, is titled Small Fry.  The last vestiges of the thin veneer of counter-culture dissension from what used to be called “straight” culture are here stripped away.  It will be released to the general public on Sept. 4.  Excerpts have been leaked to Vanity Fair and other outlets.  The buzz now is about that already famous line, uttered by Lisa’s stepmother, Laurene Powell-Jobs, at a therapy session attended by Steve, Laurene, and Lisa when Lisa was a teenager.  Ms. Brennan-Jobs cried at one point and told the therapist that she felt lonely and had wanted her parents to say good night to her.  Ms. Powell-Jobs responded to the therapist:  “We’re just cold people.”

I want to link this chilling tale with some words and ideas written by Joseph Weizenbaum, whose seminal work Computer Power and Human Reason sounded the alarm concerning a new sociopolitical dynamic way back in 1976.  In those days, there was a good deal of resistance and skepticism about the burgeoning computer technology on the part of the general public.  But it was only a surface phenomenon, as the following account shows.  It tells the tale of the creation of ELIZA, a computer program devised by Mr. Weizenbaum which mimicked the response characteristics of a Rogerian psychotherapist, as I have related in these pages before.  In the article “Silicon Valley is Not Your Friend”, Noam Cohen traces the early development of this trend:

“Interactions between people and their computers were always going to be confusing, and that confusion would be easy for programmers to exploit. John McCarthy, the computer-science pioneer who nurtured the first hackers at M.I.T. and later ran Stanford’s artificial intelligence lab, worried that programmers didn’t understand their responsibilities. ‘Computers will end up with the psychology that is convenient to their designers (and they’ll be fascist bastards if those designers don’t think twice),’ he wrote in 1983. ‘Program designers have a tendency to think of the users as idiots who need to be controlled. They should rather think of their program as a servant, whose master, the user, should be able to control it.’

“Call it the Eliza problem. In 1966, Joseph Weizenbaum, a professor at M.I.T., unveiled a computer program, Eliza, which imitated a psychotherapist. It would, by rote, inquire about your feelings toward your parents or try to get you talking by rephrasing what you said in the form of a question. The program immediately touched a nerve, becoming a national phenomenon, to the surprise of Mr. Weizenbaum. For example, The New York Times swooned: ‘Computer is Being Taught to Understand English’. Eliza understood nothing, in truth, and could never reach any shared insight with a ‘patient.’ Eliza mechanically responded to whatever appeared on the screen. A typical therapy session quickly devolved into a Monty Python sketch. (Patient: You are not very aggressive, but I think you don’t want me to notice that. Eliza: What makes you think I am not very aggressive? Patient: You don’t argue with me. Eliza: Why do you think I don’t argue with you? Patient: You are afraid of me. Eliza: Does it please you to believe I am afraid of you?)

“Imagine Mr. Weizenbaum’s surprise when his secretary looked up from her computer and interrupted her exchanges with Eliza to say to him, ‘Would you mind leaving the room, please?’ She wanted privacy for a conversation with a machine! Mr. Weizenbaum, appalled, suddenly saw the potential for mischief by programmers who could manipulate computers and potentially the rest of us. He soon switched gears and devoted his remaining years to protesting what he considered the amorality of his computer science peers, frequently referring to his experiences as a young refugee from Nazi Germany.
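For readers who want to see just how mechanical the trick was, here is a minimal sketch in the spirit of the rote rephrasing described above.  It is my own toy illustration, not Weizenbaum’s program; the rules and canned templates are invented for the example.

```python
import random
import re

# Each rule maps a pattern in the patient's input to canned question templates.
# "{0}" is filled with the captured fragment, pronouns flipped, so the machine
# appears to "listen" while understanding nothing.
RULES = [
    (re.compile(r"\byou are (.+)", re.I),
     ["What makes you think I am {0}?", "Does it please you to believe I am {0}?"]),
    (re.compile(r"\byou don't (.+)", re.I),
     ["Why do you think I don't {0}?"]),
    (re.compile(r"\bi feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
]

PRONOUN_FLIPS = {"me": "you", "my": "your", "i": "you", "you": "I"}

def flip_pronouns(fragment):
    return " ".join(PRONOUN_FLIPS.get(word.lower(), word) for word in fragment.split())

def respond(utterance):
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            fragment = flip_pronouns(match.group(1).rstrip(" .!?"))
            return random.choice(templates).format(fragment)
    # No rule matched: fall back to a contentless prompt, as Eliza did.
    return "Please go on."

print(respond("You are not very aggressive."))  # e.g. "What makes you think I am not very aggressive?"
print(respond("You don't argue with me."))      # "Why do you think I don't argue with you?"
```

A handful of such pattern-and-template rules, applied with no understanding whatever, was enough to produce the dialogue above and to make Weizenbaum’s secretary ask for privacy.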

One thought from the above passage stands out in a blinding flash of sickening realization:  computers will end up with the psychological characteristics of their designers.

Returning to the article by Noam Cohen:  “Neither Mr. Weizenbaum nor Mr. McCarthy mentioned, though it was hard to miss, that this ascendant generation were nearly all white men with a strong preference for people just like themselves. In a word, they were incorrigible, accustomed to total control of what appeared on their screens. ‘No playwright, no stage director, no emperor, however powerful,’ Mr. Weizenbaum wrote, ‘has ever exercised such absolute authority to arrange a stage or a field of battle and to command such unswervingly dutiful actors or troops.’”

How does such a mindset gain a foothold in the collective consciousness of society?  I believe it is instructive to point once again to Herbert Marcuse’s “Some Social Implications of Modern Technology” of 1941, which traces the course of a world-view that, developing wholly from within established rational frameworks, made it possible to pervert the rationalistically configured presuppositions animating the liberal autonomous individual into something approaching their opposite. I reproduce the relevant passage from this essay below:

“The principle of competitive efficiency favors the enterprises with the most highly mechanized and rationalized industrial equipment. Technological power tends to the concentration of economic power, to “large units of production, of vast corporate enterprises producing large quantities and often a striking variety of goods, of industrial empires owning and controlling materials, equipment, and processes from the extraction of raw materials to the distribution of finished products, of dominance over an entire industry by a small number of giant concerns. . . .” And technology “steadily increases the power at the command of giant concerns by creating new tools, processes and product.” Efficiency here called for integral unification and simplification, for the removal of all “waste,” the avoidance of all detours, it called for radical coordination. A contradiction exists, however, between the profit incentive that keeps the apparatus moving and the rise of the standard of living which this same apparatus has made possible. “Since control of production is in the hands of enterprisers working for profit, they will have at their disposal whatever emerges as surplus after rent, interest, labor, and other costs are met. These costs will be kept at the lowest possible minimum as a matter of course.” Under these circumstances, profitable employment of the apparatus dictates to a great extent the quantity, form and kind of commodities to be produced, and through this mode of production and distribution, the technological power of the apparatus affects the entire rationality of those whom it serves. Under the impact of this apparatus individualistic rationality has been transformed into technological rationality. It is by no means confined to the subjects and objects of large scale enterprises but characterizes the pervasive mode of thought and even the manifold forms of protest and rebellion [italics mine-dw]. This rationality establishes standards of judgment and fosters attitudes which make men ready to accept and even to introcept the dictates of the apparatus.”

To my mind, the final phrase of the last sentence quoted above is key: “attitudes which make men ready to accept and even to introcept the dictates of the apparatus”. Couple this with the idea that computers will end up with the characteristics that are convenient to their designers, designers who have thoroughly introcepted what might be called scientistic presuppositions, presuppositions which all but eliminate affect and privilege cold rationality, and the result can only be an impact that replicates these features in the general populace. We are led down a pathway in this examination that leads to many others. One of them, implied in the above quote, is that traced by Jacques Ellul, whose The Technological Society of 1954 undertakes a deep examination of this drive for efficiency in the context of technological development. Efficiency is expressed in optimally rationalized action, and the implications for a mindset which privileges, nay reifies, efficiency over all other concerns are profound. Computers will end up with the characteristics of their designers, and from there, these characteristics are passed on to the end-user on a scale unseen in human history. A key point in Ellul’s critique is that this “technological imperative” has been allowed to achieve full autonomy: all other concerns are subsumed into it, ends are subordinated to means, and the process is fully independent of any specific form of social organization, contrary to what Marcuse and most leftist critics of the technocracy would have us believe. Technique, in this reading, follows a protocol that proceeds without regard to context: “The primary aspect of autonomy is perfectly expressed by Frederick Winslow Taylor, a leading technician. He takes, as his point of departure, the view that the industrial plant is a whole in itself, a ‘closed organism’, an end in itself…the complete separation of the goal from the mechanism, the limitation of the problem to the means, and the refusal to interfere in any way with efficiency, all this is clearly expressed by Taylor and lies at the basis of technical autonomy.”

Considering the totalizing nature of this phenomenon, it is easy to see how the cohort which invests in it could grow almost without limit, given the rise of the new type of personality so memorably voiced by Ms. Laurene Powell-Jobs in the quote cited above.  If one accepts the basic argument Marcuse puts forward in “Some Social Implications of Modern Technology”, especially that this technological rationality “make[s] men ready to accept and even to introcept the dictates of the apparatus”, one begins to grasp the enormity of the problem.  Ellul and Marcuse convincingly demonstrate that this technological rationality operates at the level of deeply held philosophical conviction.  This is nothing short of an ideological imperative, an ethos of living, which dovetails with the mechanistic processes of “persuasive design” to encourage the rise of “the cold people” and marginalize the rest of us.