“We’re Just Cold People”

The new memoir by Steve Jobs’ daughter, Lisa Brennan-Jobs, is titled Small Fry.  It will be released to the general public on Sept. 4, and excerpts have already been leaked to Vanity Fair and other outlets.  In it, the last vestiges of the thin veneer of counter-culture dissension from what used to be called “straight” culture are stripped away.  The buzz now is about that already famous line, uttered by Lisa’s stepmother, Laurene Powell-Jobs, at a therapy session attended by Steve, Laurene, and Lisa when Lisa was a teenager.  At one point Ms. Brennan-Jobs cried and told the therapist that she felt lonely and had wanted her parents to say good night to her.  Ms. Powell-Jobs responded to the therapist:  “We’re just cold people.”

I want to link this chilling tale with some words and ideas written by Joseph Weizenbaum, whose seminal work Computer Power and Human Reason sounded the alarm concerning a new sociopolitical dynamic as far back as 1976.  In those days, there was a good deal of resistance and skepticism toward the burgeoning computer technology on the part of the general public.  But it was only a surface phenomenon, as the following account shows.  It tells the tale of the creation of ELIZA, a computer program devised by Mr. Weizenbaum that mimicked the response characteristics of a Rogerian psychotherapist, as I have related in these pages before.  In the article “Silicon Valley Is Not Your Friend”, Noam Cohen traces the early development of this trend:

“Interactions between people and their computers were always going to be confusing, and that confusion would be easy for programmers to exploit. John McCarthy, the computer-science pioneer who nurtured the first hackers at M.I.T. and later ran Stanford’s artificial intelligence lab, worried that programmers didn’t understand their responsibilities. ‘Computers will end up with the psychology that is convenient to their designers (and they’ll be fascist bastards if those designers don’t think twice),’ he wrote in 1983. ‘Program designers have a tendency to think of the users as idiots who need to be controlled. They should rather think of their program as a servant, whose master, the user, should be able to control it.’

“Call it the Eliza problem. In 1966, Joseph Weizenbaum, a professor at M.I.T., unveiled a computer program, Eliza, which imitated a psychotherapist. It would, by rote, inquire about your feelings toward your parents or try to get you talking by rephrasing what you said in the form of a question. The program immediately touched a nerve, becoming a national phenomenon, to the surprise of Mr. Weizenbaum. For example, The New York Times swooned: ‘Computer is Being Taught to Understand English.’ Eliza understood nothing, in truth, and could never reach any shared insight with a ‘patient.’ Eliza mechanically responded to whatever appeared on the screen. A typical therapy session quickly devolved into a Monty Python sketch. (Patient: You are not very aggressive, but I think you don’t want me to notice that. Eliza: What makes you think I am not very aggressive? Patient: You don’t argue with me. Eliza: Why do you think I don’t argue with you? Patient: You are afraid of me. Eliza: Does it please you to believe I am afraid of you?)

“Imagine Mr. Weizenbaum’s surprise when his secretary looked up from her computer and interrupted her exchanges with Eliza to say to him, ‘Would you mind leaving the room, please?’ She wanted privacy for a conversation with a machine! Mr. Weizenbaum, appalled, suddenly saw the potential for mischief by programmers who could manipulate computers and potentially the rest of us. He soon switched gears and devoted his remaining years to protesting what he considered the amorality of his computer science peers, frequently referring to his experiences as a young refugee from Nazi Germany.

One thought from the above passage stands out in a blinding flash of sickening realization:  computers will end up with the psychological characteristics of their designers.
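Before going further, it is worth seeing just how little machinery lay behind the illusion.  Below is a minimal ELIZA-style sketch of the rephrasing trick described in the passage above; the patterns and word list are my own illustrations, not Weizenbaum’s original DOCTOR script.

```python
import re

# A tiny ELIZA-style responder in the spirit of the Rogerian trick described
# above: match a keyword pattern, swap the pronouns, and echo the rest back
# as a question.  The rules here are illustrative only.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "you": "I", "your": "my"}

RULES = [
    (re.compile(r"you are (.+)", re.I), "What makes you think I am {0}?"),
    (re.compile(r"you don't (.+)", re.I), "Why do you think I don't {0}?"),
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo reads naturally."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    """Return a canned rephrasing; no understanding is involved anywhere."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!?")))
    return "Please go on."  # default when nothing matches

print(respond("You are not very aggressive."))  # What makes you think I am not very aggressive?
print(respond("You don't argue with me."))      # Why do you think I don't argue with you?
```

A few dozen lines of pattern matching were enough to make people ask for privacy with the machine.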

Returning to the article by Noam Cohen:  “Neither Mr. Weizenbaum nor Mr. McCarthy mentioned, though it was hard to miss, that this ascendant generation were nearly all white men with a strong preference for people just like themselves. In a word, they were incorrigible, accustomed to total control of what appeared on their screens. ‘No playwright, no stage director, no emperor, however powerful,’ Mr. Weizenbaum wrote, ‘has ever exercised such absolute authority to arrange a stage or a field of battle and to command such unswervingly dutiful actors or troops.’”

How does such a mindset gain a foothold in the collective consciousness of society?  I believe it is instructive to point once again to Herbert Marcuse’s “Some Implications of Modern Technology” of 1941, which traces the course of a world-view that, developing wholly from within established rational frameworks, made it possible to pervert the rationalistically configured presuppositions animating the liberal autonomous individual into something approaching their opposite. I reproduce the relevant passage from this essay below:

“The principle of competitive efficiency favors the enterprises with the most highly mechanized and rationalized industrial equipment. Technological power tends to the concentration of economic power, to “large units of production, of vast corporate enterprises producing large quantities and often a striking variety of goods, of industrial empires owning and controlling materials, equipment, and processes from the extraction of raw materials to the distribution of finished products, of dominance over an entire industry by a small number of giant concerns. . . .” And technology “steadily increases the power at the command of giant concerns by creating new tools, processes and products.” Efficiency here called for integral unification and simplification, for the removal of all “waste,” the avoidance of all detours, it called for radical coordination. A contradiction exists, however, between the profit incentive that keeps the apparatus moving and the rise of the standard of living which this same apparatus has made possible. “Since control of production is in the hands of enterprisers working for profit, they will have at their disposal whatever emerges as surplus after rent, interest, labor, and other costs are met. These costs will be kept at the lowest possible minimum as a matter of course.” Under these circumstances, profitable employment of the apparatus dictates to a great extent the quantity, form and kind of commodities to be produced, and through this mode of production and distribution, the technological power of the apparatus affects the entire rationality of those whom it serves. Under the impact of this apparatus individualistic rationality has been transformed into technological rationality. It is by no means confined to the subjects and objects of large scale enterprises but characterizes the pervasive mode of thought and even the manifold forms of protest and rebellion [italics mine-dw]. This rationality establishes standards of judgment and fosters attitudes which make men ready to accept and even to introcept the dictates of the apparatus.”

To my mind, the last phrase in the last sentence quoted above is key: “attitudes which make men ready to accept and even to introcept the dictates of the apparatus.” Couple this with the idea that computers will end up with the characteristics that are convenient to their designers, designers who have thoroughly introcepted what might be called scientistic presuppositions, ones which all but eliminate affect and privilege cold rationality, and the impact can only be to replicate these features in the general populace. We are led down a pathway in this examination that leads to many others. One of them, implied in the above quote, is that traced by Jacques Ellul, whose The Technological Society of 1954 undertakes a deep examination of this drive for efficiency in the context of technological development. Efficiency is expressed in optimally rationalized action, and the implications of a mindset which privileges, nay reifies, efficiency over all other concerns are profound. Computers will end up with the characteristics of their designers, and from there these characteristics are passed on to the end-user on a scale unseen in human history. A key point in Ellul’s critique is that this “technological imperative” has been allowed to achieve full autonomy, that all other concerns are subsumed into it: it subordinates ends to means and is fully independent of any specific form of social organization, contrary to what Marcuse and most leftist critics of the technocracy would have us believe. Technique, in this reading, follows a protocol that proceeds without regard to context: “The primary aspect of autonomy is perfectly expressed by Frederick Winslow Taylor, a leading technician. He takes, as his point of departure, the view that the industrial plant is a whole in itself, a ‘closed organism’, an end in itself…the complete separation of the goal from the mechanism, the limitation of the problem to the means, and the refusal to interfere in any way with efficiency, all this is clearly expressed by Taylor and lies at the basis of technical autonomy.”

Considering the totalizing nature of this phenomenon, it is easy to see how the cohort which invests in it could grow almost without limit, given the rise of the new type of personality so memorably voiced by Ms. Laurene Powell-Jobs in the quote cited above.  If one accepts the basic argument Marcuse puts forward in “Some Implications of Modern Technology”, especially that this technological rationality “make[s] men ready to accept and even to introcept the dictates of the apparatus”, one begins to grasp the enormity of the problem.   Ellul and Marcuse convincingly demonstrate that this technological rationality operates at the level of the deeply held philosophical conviction.  This is nothing short of an ideological imperative, an ethos of living, which dovetails into the mechanistic processes of “persuasive design” to encourage the rise of “the cold people” and marginalize the rest of us.

Deception by Design

I will be referring here to a study undertaken by the Norwegian Consumer Council, or Forbrukerrådet.  We are speaking here of the subtle and not-so-subtle techniques that the big gatekeepers (Google, Microsoft, Facebook) employ to “nudge” users into giving up more privacy than they otherwise might.  What I really want to point the reader towards, however, are the wider implications of this engineering, beyond privacy questions strictly speaking, into the Marcusean realms of compliant efficiency and technological rationality, which I have discussed in some depth in previous essays.  The study is titled “Deceived by Design”, with the subheading “How tech companies use dark patterns to discourage us from exercising our rights to privacy”.  It employs the neologism “dark patterns”, a term only eight years old, which can be defined as a user interface carefully crafted to trick users into doing things not necessarily in their best interest.  This employment of deception stems from the monetization of data.  From the report:

“Because many digital service providers make their money from the accumulation of data, they have a strong incentive to make users share as much information as possible. On the other hand, users may not have knowledge about how their personal data is used, and what consequences this data collection could have over time. If a service provider wants to collect as much personal data as possible, and the user cannot see the consequences of this data collection, there is an information asymmetry, in which the service provider holds considerable power over the user.”

One of the most salient techniques is “nudging”.  Again, from the report:

“The concept of nudging comes from the fields of behavioural economy and psychology, and describes how users can be led toward making certain choices by appealing to psychological biases. Rather than making decisions based on rationality, individuals have a tendency to be influenced by a variety of cognitive biases, often without being aware of it. For example, individuals have a tendency to choose smaller short-term rewards, rather than larger long-term gains (hyperbolic discounting), and prefer choices and information that confirm our pre-existing beliefs (confirmation bias). Interface designers who are aware of these biases can use this knowledge to effectively nudge users into making particular choices. In digital services, design of user interfaces is in many ways even more important than the words used. The psychology behind nudging can also be used exploitatively, to direct the user to actions that benefit the service provider, but which may not be in the user’s interests. This can be done in various ways, such as by obscuring the full price of a product, using confusing language, or by switching the placement of certain functions contrary to user expectations…[T]he information asymmetry in many digital services becomes particularly large because most users cannot accurately ascertain the risks of exposing their privacy.”
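To give one of these biases concrete shape: hyperbolic discounting is commonly modeled by valuing a reward A delayed by D days at A / (1 + kD).  The following is a minimal sketch; the discount rate k and the dollar figures are arbitrary assumptions for illustration, not numbers from the report.

```python
def hyperbolic_value(amount: float, delay_days: float, k: float = 0.1) -> float:
    """Perceived present value under hyperbolic discounting: A / (1 + k*D).

    k is the discount rate per day; 0.1 is an arbitrary illustrative value.
    """
    return amount / (1 + k * delay_days)

# "Share your data and get the convenience right now" versus
# "keep your privacy and avoid a possibly larger harm months from now".
immediate_convenience = hyperbolic_value(10, delay_days=0)     # 10.0
delayed_privacy_cost = hyperbolic_value(100, delay_days=180)   # about 5.3

# The small, instant reward subjectively outweighs the much larger but
# distant cost -- exactly the asymmetry a nudging interface can exploit.
print(immediate_convenience > delayed_privacy_cost)  # True
```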

In the case of privacy settings, the basic fact is that most users do not go into the menus of Amazon, Google, or Facebook to change what is set up by default.  This can only be because there is an implicit trust in the service provider on the part of the user.   This trust is abrogated as a matter of course.  Time and again the service providers show their true colors by making it difficult for the user to maximize privacy.  The study offers the following example:

“The Facebook GDPR [General Data Protection Regulation] popup requires users to go into ‘Manage data settings’ to turn off ads based on data from third parties. If the user simply clicks ‘Accept and continue’, the setting is automatically turned on.  This is not privacy by default…Facebook and Google both have default settings preselected to the least privacy friendly options.”
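The pattern is easy to state schematically: the provider ships defaults pre-set to the least privacy-friendly options, the one prominent button simply commits those defaults, and turning anything off requires a detour through a separate settings flow.  The sketch below illustrates the logic only; the setting names are invented for illustration and are not any vendor’s actual configuration keys.

```python
# Schematic contrast between a dark-pattern default and "privacy by default".
# Setting names are invented for illustration; they are not real product options.

DARK_PATTERN_DEFAULTS = {
    "ads_based_on_third_party_data": True,   # least-private option pre-selected
    "face_recognition": True,
}

PRIVACY_BY_DEFAULT = {
    "ads_based_on_third_party_data": False,  # user must actively opt in
    "face_recognition": False,
}

def accept_and_continue(defaults, user_changes=None):
    """One prominent click commits whatever the provider pre-selected.

    Opting out requires the user to dig into a separate settings flow and
    pass explicit changes here -- which most users never do.
    """
    settings = dict(defaults)
    settings.update(user_changes or {})
    return settings

# Most users click the highlighted button and never open the settings menu:
print(accept_and_continue(DARK_PATTERN_DEFAULTS))
# {'ads_based_on_third_party_data': True, 'face_recognition': True}
```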

I think that I have conveyed the general idea of the problem.  While this study does not reveal any techniques that were not previously known, getting the average user to grasp just how it all works in the macro sense is another matter; that realization encounters considerable resistance.

Franklin Foer, in his review of the latest Jaron Lanier book, Ten Arguments for Deleting Your Social Media Accounts Right Now, makes reference to the “Facebook manipulation machine.”  The practices outlined in the Norwegian study make it clear that this manipulation machine is not limited to Facebook.  It is ingrained in the protocols that one encounters nearly every step of the way as one navigates the internet. As such, it functions as a giant behavior modification apparatus, acclimating the user to a new cognitive paradigm, one in which user autonomy is progressively degraded, arguably on the way to its eventual elimination. I ask the reader of this article to keep in mind that this manipulation machine is in its nascent stages.  Look how far it has come in just the eight years since the term “dark patterns” was coined.  What will the state of affairs be in ten, twenty, a hundred more years?

The #PlaneBae saga

Rosey Blair’s opening of the portals of untruth, as in “the crowd is untruth”, underscores once again the perils and wretchedness of ubiquitous surveillance (often carried out not by governments or faceless megacorporations but by ordinary citizens), coupled with the unwholesome tendencies toward exploitative voyeurism that so many find unproblematic.

For those of you not familiar with the story: the online thread, a Twitter feed which of course went viral, documented without permission the so-called “budding romance” between two people getting to know each other on an airplane.

One article on this subject quotes someone named Taylor Lorenz, an internet culture reporter for The Atlantic.

“That’s one of the biggest problems with social media: It allows you to exploit the world and people around you to get attention for your own benefit,” Lorenz told the online periodical Business Insider. “Everyone is just using each other for content. It encourages you to look at the world that way.”

For me this quote summarizes, in broad outline, the problem we are now faced with.  It is far more significant than the fact that one party got exploited one time.  One has to assume that everything one does in public can be treated in this way.  The moral:  look behind you in the plane cabin, in the theatre, walking down the street, to see if anyone is spying on you with their smartphone.  Don’t say anything that the masses will pick up on to your detriment.  Know your enemy.  One must now contend with yet another layer of the intrusion of the herd mentality into lives this herd does not respect, a mentality which operates on the principle that 60,000 romance voyeurs can’t be wrong.  It is “merely” a matter of magnification of tendencies that have been there for millennia, of course.  But now the world is Peyton Place.  Everyone is using each other for content, exploiting the world to get attention for one’s own benefit.  Do you really want to play this game?

Ten Arguments for Deleting Your Social Media Accounts Right Now

By Jaron Lanier.  Review by Franklin Foer, NYT Book Review, Sunday, July 1, 2018

From the review:

“He [Lanier] worries that our reliance on big tech companies is ruining our capacity for spirituality, by turning us into robotic extensions of their machines. The companies, he argues, have no appreciation for the ‘mystical spark inside you.’ They don’t understand the magic of human consciousness and, therefore, will recklessly destroy it.”

Yes, the “mystical spark”. It’s true, the brain trusts of the big social media companies don’t understand this aspect of human existence. They are zealots in the School of Behaviorism, in which the will to objectivity has reduced the status of inner experience to that of a mere footnote. This polemic, instantiated today as the quarrel between Behaviorism and the advocates of the existence of the “mystical spark”, stems from the great quarrel between science and the humanities, ultimately between science and poetry, which stretches back into ancient times.  Plato is famous for coming down squarely against the poets and for the sciences on this crucial question.  “And the rest is history,” as they say.  The Enlightenment recalled the Greek ideal, and Reason as a result became the new god: it was reified; it became the Supreme Court that functions as the court of last appeal.  One can’t put reasoning at the service of the passions without a return to the triumph of irrational chaos, so the reasoning goes.  But the law of unintended consequences is always lurking to find the flaw in the given conception.  Since the time of Condorcet we have been grappling with this enormous question, and one then wishes, realizing the enormous stakes involved, to look back at the viewpoints gathered under the heading of the Sturm und Drang, or “Counter-Enlightenment”, which indicts Reason as the reified master myth of a death-oriented civilization.

The following passage in Foer’s review of Lanier’s book highlights the sorry state of affairs we have arrived at in our society in our “liberal” regime of tolerance:

“Critics of the big technology companies have refrained from hectoring users to quit social media. It’s far more comfortable to slam a corporate leviathan than it is to shame your aunt or high school pals — or, for that matter, to jettison your own long list of “friends.” As our informational ecosystem has been rubbished, we have placed very little onus on the more than two billion users of Facebook and Twitter. So I’m grateful to Jaron Lanier for redistributing blame on the lumpen-user, for pressing the public to flee social media. He writes, ‘If you’re not part of the solution, there will be no solution.'”

Which calls to mind a passage in a book by Matthew B. Crawford, The World Beyond Your Head (2015):

“We abstain on principle from condemning activities that leave one compromised and degraded, because we fear that disapproval of the activity would be paternalistic toward those who engage in it.  We also abstain from affirming some substantive picture of human flourishing, for fear of imposing our values on others.  This gives us a pleasant feeling:  we have succeeded in not being paternalistic or presumptuous.  The priority we give to avoiding these vices in particular is rooted in our respect for persons, understood as autonomous.  “People should be allowed to make their own decisions.”  Liberal agnosticism about the good life has some compelling historical reasons behind it.  It is a mind-set that we consciously cultivated as an antidote to the religious wars of centuries ago, when people slaughtered one another over ultimate differences.  After WWII, revulsion with totalitarian regimes of the right and left made us redouble our liberal commitment to neutrality.  But this stance is maladaptive in the context of 21st century capitalism…the everyday threats to your well-being no longer come from an ideological rival or a theological threat to the liberal secular order.  They are native to that order.”

Foer says at one point in his review that, generally speaking, critics have avoided calling out the average “lumpen-user” as complicit in Facebook et al’s “manipulation machine”. But this is inevitable in an ideological framework organized around the principle of “vox populi est vox dei” and the liberal tolerance stemming therefrom, the basic principles of which I sketched above.  If the numbers are there, the people have spoken and that is the final word. But truth cannot be decided by numbers, and the democratic experiment implodes as a result of this realization. The “lumpen-user”.  I like that phrasing. One imagines a mechanism wherein the “lumpen-user” is made progressively more “lumpen” by increasingly greater enmeshment in social media, television, and internet use in general. And this is just the beginning. At such a nodal point I always think of a scene in the movie “The President’s Analyst”, where the head of TPC (The Phone Company) outlines to the protagonist plans to install transistors right inside the brain, so you can make and receive calls just by thinking.  Sherry Turkle assures me that the best minds at MIT are working hard at implementing just such a system now. Who wants to have that little tablet in one’s hand all the time when it can all be right inside you?

The overall point must be stressed: We must speak out now against participation in this soul-draining mechanism.  Of course speaking out in some inchoate way is not enough.  We must develop the sensibility that is adequate to countering this master-myth of reified Reason.  Reason has betrayed the mind; what is the way out of this labyrinth? That is my project, some of which has been outlined here beginning this past January.

If it takes “shaming” the lumpen-user into realizing the stakes, then so be it. To me, shaming isn’t necessary, if one can outline the advantages of rejection of this ideologically-based mechanism.  This grand initiative, bequeathal of Condorcet and Comte, now takes on its true form as the greatest enemy of the human spirit that has as yet materialized.  Victims of technological rationalism unite!

Butler’s Erewhon

Samuel Butler’s Erewhon was first published in 1872. (In other words, the seventy-second year of the 19th century.)  The following excerpt is from the Modern Library edition of 1927, pp. 227-29.

“…[t]he writer went on to say that he anticipated a time when it would be possible, by examining a single hair with a powerful microscope, to know whether its owner could be insulted with impunity.  He then became more and more obscure, so that I was obliged to give up all attempts at translation; neither did I follow the drift of his argument.  On coming to the next part which I could construe, I found that he had changed his ground.

“‘Either’, he proceeds, ‘a great deal of action that has been called purely mechanical and unconscious must be admitted to contain more elements of consciousness than has been allowed hitherto (and in this case germs of consciousness will be found in many actions of the higher machines)–or (assuming the theory of evolution but at the same time denying the consciousness of vegetable and crystalline action) the race of man has descended from things which had no consciousness at all.  In this case there is no a priori improbability in the descent of conscious (and more than conscious) machines from those which now exist, except that which is suggested by the apparent absence of anything like a reproductive system in the mechanical kingdom.  This absence however is only apparent, as I shall presently show.

“‘Do not let me be misunderstood as living in fear of any existing machine; there is probably no known machine which is more than a prototype of future mechanical life.  The present machines are to the future as the early Saurians to man.  The largest of them will probably greatly diminish in size.  Some of the lowest vertebrata attained a much greater bulk than has descended to their more highly organized representatives, and in like manner a diminution in the size of machines has often attended their development and progress.

“‘Take the watch, for example; examine its beautiful structure; observe the intelligent play of the minute members which compose it; yet this little creature is but a development of the cumbrous clocks that preceded it; it is no deterioration from them.  A day may come when clocks, which certainly at the present time are not diminishing in bulk, will be superseded owing to the universal use of watches, in which case they will become as extinct as ichthyosauri, while the watch, whose tendency has for some years been to decrease in size rather than the contrary, will remain the only existing type of an extinct race.

“‘But returning to the argument, I would repeat that I fear none of the existing machines; what I fear is the extraordinary rapidity with which they are becoming something far different to what they are at present.  No class of beings have in any time past made so rapid a movement forward.  Should not that movement be jealously watched, and checked while we can still check it?  And is it not necessary for this end to destroy the more advanced of the machines which are in use at present, though it is admitted that they are in themselves harmless?'”

The Smartphone’s Effect on the Brain and Marcuse’s ‘Technological Rationality’

In this essay I reference a March 10, 2018 article in Business Insider titled “This is what your smartphone is doing to your brain–and it isn’t good”.  Ultimately, one must move away from the positivistic paradigm that is all but ubiquitous now, with its bias towards the quantifiable.  This will to objectivity is the real problem with the Western drive for knowledge, and statistics forms the basis of its raison d’etre.  Nevertheless I am going to cite a telling statistic from this article, an alarming statistic, one that should make one rouse oneself from one’s slumber and ask with urgency, “What is happening to me?”  But the system itself, as it effects deeper and deeper inroads into the psyche, militates against such realizations.

What force in the human soul can be held responsible for this erosion of selfhood?  To address this question, I make recourse to the thesis of Siegfried Kracauer, as propounded in his book From Caligari to Hitler: A Psychological History of the German Film (1947), which posits that a mythic figure, embodied in a large cohort of the populace and reflected in the popular films of the day in the run-up to National Socialism’s triumph in Germany, the Somnambulist, a being without personal will who does the bidding of a slavemaster, was the eminence grise behind Hitler’s rise to power.  Caligari, from the 1920 film The Cabinet of Dr. Caligari, the puppetmaster who decides who is sane and who is not but is himself insane, finds his patsy in the Somnambulist, who, under hypnosis, goes out to rid Caligari of his enemies by murdering them.  Kracauer borrows a page from the philosopher Martin Heidegger, who believed that most people lead what he called an “inauthentic” existence, estranged from self and only going through the motions of lived experience. Something deep in the soul predisposes the “average person” toward manipulation by outside forces, evidence of a split in the personality which denigrates the present moment and privileges the Beyond.  In this state of soul the individual gravitates unknowingly towards oblivion while at the same time believing he or she is a fully autonomous being.

Now I return to the statistic I referred to at the beginning of this essay:  86% of Americans with smartphones (which is almost everyone above 10 years old) say they check their phones “constantly.”  And for most it is not a pleasant experience.  According to the writer of the article, this cohort is seriously “stressed out”.

But this state of affairs didn’t just pop into being in 2007 with the introduction of the smartphone into American society.  Certain salient implications of our burgeoning technology were early recognized, 77 years ago, by Herbert Marcuse, whose notion of “technological rationality” appears as a doleful harbinger of our time and the time after our time.  Here is a quote from Jeffrey Ocay’s article “Technology, Technological Domination and the Great Refusal” on the phenomenon of technological rationality:

“For Marcuse, technological rationality refers primarily to the assigning of mental powers to the apparatus that calls for unconditional compliance and coordination.  In other words, technological rationality means the subordination of thoughts to the machine process so that it is no longer the individual that directs the machine but the other way around.

“According to Marcuse, technological rationality arises when, in the medium of technology, culture, politics, and the increasing power of the economic system merge into an omnipresent system which swallows up or repulses all alternatives.  This eventually ‘extends to all spheres of private and public existence,’ integrating all authentic opposition and absorbing all alternatives.  In this way, technology, which is originally an external power over nature, has been internalized by the individuals. Here, reason has lost its meaning because the thoughts, feelings, and actions of men are shaped by the technical requirements of the apparatus which demands compliance and adjustment.  Thus, the human psyche is transformed into mere biological impulses which make the individual a passive agent of production as well as reduce the individual into mere spectator who adjusts to the technical processes of production. Consequently, technological rationality dissolves critical thinking and replaces it with the idea of compliant efficiency, which results in the individual’s submission to the apparatus without any form of mental and physical opposition.”

One grants at the outset that this technology does have undeniable benefits.  But it’s becoming increasingly clear that these benefits function as a shield, causing the mind sufficiently immersed to downplay what drawbacks there may be.  Marcuse shows that this progression towards the reduction of the mind to a mere mechanism, an appendage  of the technological apparatus, can’t be successfully addressed by recourse to such tepid techniques as advocated by, for example, the Center for Humane Technology.  It is a classic case of too little, too late.   Only by fully rejecting technological rationality’s hold on the psyche can the downward spiral towards personal dissolution be stopped.  The Zone (see Natasha Dow Schüll’s Addiction by Design for an exhaustive discussion of the psychodynamics of The Zone) is a fearsome place.  The global “subordination of thoughts to the machine process” is too high a price to pay for this technology’s benefits!

The Drive for Knowledge

In the late 19th century a peculiar tendency began to see the light of day.  One may call it the critique of truth.  It is summarized in one of Friedrich Nietzsche’s early essays, “On Truth and Lies in an Extramoral Sense”: “Considered as an unconditional duty, truth stands in a hostile and destructive relationship to the world.”  To assert that such a statement is provocative would be the understatement of the century.  It cuts to the very core of the modern Western enterprise.  It asks a very deep question indeed:  What is the significance of the quest for truth?  One could argue that this quest can only give birth to a culture characterized by a fundamental disunity, in that no distinction is made between the great and the small, what is better and what is worse, what is conducive to integration based in human values and what is not.  As such the scientific drive for truth aims at utter nihilism.  Such is the conundrum:  either pluralism reigns with its tendency toward cultural disunity, or the intrinsic hierarchies are allowed to coalesce and hard authoritarianism is the likely outcome, at least without some insight into what could constitute an alternative to our mechanistic concepts of the necessity of coercion, as embodied in the concept of Human Law.  In liberal democracy, many people aspire to become something more than an expression of the values of mass culture.  But ultimate authority is granted to those ideas which have the highest number of adherents, without regard for their coherence or nobility, and so liberal democracy becomes complicit with the scientific truth drive in its disregard for the difference between great and small.

One example of how this truth drive fragments our integrity is the electronic, internet-connected baby monitor that has come on the market recently.  Mother uses her smartphone to remotely monitor her baby at home.  At first look this is a seemingly helpful and benign aid to mother’s peace of mind, but many firsthand accounts of how this tool actually works reveal a far different reality.  Alerts come to the mother’s attention with the slightest change in breathing, a bit of fussing that 99% of the time is normal baby behavior, and so on (a rough numerical sketch of what this means in practice follows below).  Companies are in fact exploiting the irrational insecurities of parents to generate data for profit.  Is one really better off with this sharper picture of the baby’s behavior?  This, in addition to aiding and abetting the incessant culture of tracking in general, with its implications for the erosion of human autonomy and privacy.

I now return to the potential distinction between the small and the great that the will to absolute truth undercuts.  The quandary deepens every day:  we have no recourse to human values in a system that elevates the motley to the ultimate position of authority.  Where the distinction between small and great is not recognized, no culture can survive.  And so it becomes the signal goal of the exit from the nightmare of history to determine what is great in the soul, a people, a culture.  This necessitates a willingness to forego the attainment of certain kinds of truth; in my example, the “truth” of constantly knowing what is happening in the baby’s crib, when more intermittent monitoring would actually yield a more realistic evaluation of baby’s needs.  Illusion trumps truth in creating a better spiritual state of being for all concerned.
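Here is the rough numerical sketch promised above.  All figures are invented for illustration; only the base-rate logic matters: if the overwhelming majority of alert-triggering events are ordinary infant behavior, then almost every notification the parent receives is noise.

```python
# Back-of-the-envelope sketch of the always-on baby-monitor alert problem.
# The figures are invented for illustration; only the base-rate logic matters.

alerts_per_day = 40        # hypothetical: breathing variations, fussing, etc.
benign_fraction = 0.99     # "99% of the time is normal baby behavior"

false_alarms = alerts_per_day * benign_fraction
genuine_concerns = alerts_per_day * (1 - benign_fraction)

print(f"false alarms per day:     {false_alarms:.1f}")      # about 39.6
print(f"genuine concerns per day: {genuine_concerns:.1f}")  # about 0.4

# The parent is conditioned to attend to dozens of meaningless notifications
# in order to catch, on average, well under one event per day that might matter.
```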
Of course, this principle extends to all tracking, entrainment, enmeshment in mechanistic cultural entities, the Internet of Things perhaps being the ultimate expression thereof.  One will know all things happening at all times.  Is this conducive to human flourishing?  The scary thing is that this entity can become the unifying element of culture at the level of the motley.  What is great is defined as the incoherent.  This is institutionalized madness.