For those who aren’t aware, there is a lot of talk again about the coming arrival of machine intelligence as a singularity point. The idea has enough currency to be referred to simply as “The Singularity”. To my mind there always seems to be a lot of underlying dread about the coming event, even and maybe especially in the voices of the most optimistic, that it could foretell the end of the human race. Think Terminator, or preferably WarGames, and you get how that could happen.

I saw Jaron Lanier speak last week at PARC. Among the many things he said (more later), one is relevant to the notion of the coming singularity. He posited that we could actually be facing an anti-singularity, in which things get so complicated and unworkable that artificial intelligence can never emerge. I don’t recall whether part of his notion is that this could destroy us as well, but to my mind this scenario carried to completion, in the form of all IT infrastructure grinding to a halt, certainly seems far more likely to be a bad development for the human race than a self-aware AI system.

Personally I don’t believe in the singularity concept. Not in the sense that we will never achieve AI, but I do not believe it will awake as a separate consciousness perceptible to us. That presumes we can actually define our own consciousness in a way that is measurable, much less an alien consciousness such as might appear in a machine. Even “awake” is a debatable concept directly related to consciousness; read some Ouspensky or Gurdjieff to learn more about why I believe this.

I do believe that there is an inseparable co-dependence between humans and machines, whether they be intelligent or not. So things will continue, machines will get smarter, we will get smarter, and our descendants will be very different from us. But I do not believe there will be a single point, à la the singularity, that will allow us to say that is when it happened.


3 Comments


  1. Mark Finnern
Bummer I missed the Lanier talk, I would have loved to hear him.

You dismiss the concept of the Singularity by reducing it to AI that “awakes as a separate consciousness perceptible to us.”

That is not what Vernor Vinge defines as the Singularity: “It is a point where our old models must be discarded and a new reality rules.”

    What is needed is “.. the imminent creation by technology of entities with greater than human intelligence …”

    He goes on to describe several ways to get there: Number 4 is “Biological science may provide means to improve natural human intellect.”

Even if it is, and I actually hope it is, within the natural human intellect, we are still on this curve of ever-accelerating change, and there will be a point where the old rules have to be discarded.

    Will we be able to point to a day and say here it happened? Don’t know.

  2. I really reject the concept of the singularity because I object to the idea that you can refer to a single point in time as representative of anything. I prefer for example the phrasing that Hunter S. Thompson used to explain that “San Francisco in the middle 1960’s was a very special place.” Perhaps someday we will look back and say “in the early 2020’s it seemed a new AI came online every day.”
    So I do believe in many of the ideas related to the singularity concept, especially the curve of accelerating change, but I think the term itself is wrong. I don’t think there will be one moment when our old models fail. Building a new reality is a process from moment to moment. Was the creation of the car a singularity event? It was not until decades later that it fundamentally changed everything, through the processes that emerged from its acceptance. I think some of the coming technologies related to AI will change things around us in a similar way. The old models will fail, and a new reality will emerge by consensus over many years with no single origin event.

    I will go on record to say that I do not believe we will create an entity with greater than human intelligence. I believe it will be different, and that human, and indeed any, intelligence is immeasurable. I also believe that identifying these entities will be exceedingly difficult due to our inadequate understanding of what consciousness is.

