I saw Jaron Lanier speak last week at PARC. Among the many things he said (more on that later), one is relevant to the notion of the coming singularity. He posited that we could actually be facing an anti-singularity, in which things get so complicated and unworkable that artificial intelligence can never emerge. I don't recall whether his notion included this destroying us as well, but to my mind this scenario carried to completion, with all IT infrastructure grinding to a halt, seems far more certain to be a bad development for the human race than a self-aware AI system.
Personally I don't believe in the singularity concept. Not in the sense that we will never achieve AI, but I do not believe it will awake as a separate consciousness perceptible to us. That presumes we can actually define our own consciousness in a way that is measurable, much less an alien consciousness such as one that might appear in a machine. Even "awake" is a debatable concept directly related to consciousness; read some Ouspensky or Gurdjieff to learn more about why I believe this.
I do believe there is an inseparable co-dependence between humans and machines, whether they are intelligent or not. So things will continue: machines will get smarter, we will get smarter, and our descendants will be very different from us. But I do not believe there will be a single point, à la the singularity, that will allow us to say that is when it happened.