Singularity University

Singularity University is the first major academic effort to study the acceleration of technological change, which many believe will lead to the most profound changes the world has ever seen: first in the form of conscious computing, and then perhaps as an explosion of change that will transform all of humanity.

Sound incredible? It will be, which is why NASA, Google, and a host of interesting folks are all involved in the project, which will be based at NASA Ames in Silicon Valley.

More details are in the Singularity University Press Release.

21 thoughts on “Singularity University”

  1. Teh SINGOOLARITEE sheez’a comin’ winter solstice 2012!

    As always, Vernor Vinge’s thoughts on the singularity are very well thought out. I think even he underestimates the power of the little molecular machines busy inside the brain. Computer people like to think about neurons because they think maybe they’re like big squishy transistors.

    Weirdly enough, I once ran a computer simulation that predicted a non-Singularity “everything is going in the recycler” event close to the end of 2012. When the Ruskies folded up the USSR and the Chinese became crypto-capitalists I started to question that simulation, though.
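    The “big squishy transistors” view above usually refers to spiking neuron models like the leaky integrate-and-fire neuron. As a hedged illustration (parameter values here are textbook-style choices, not anything from the comment), a minimal simulation might look like:

    ```python
    def lif_step(v, i_in, dt=1e-3, tau=0.02,
                 v_rest=-65e-3, v_thresh=-50e-3, v_reset=-70e-3, r=1e7):
        """One Euler step of a leaky integrate-and-fire neuron.

        The membrane voltage v decays toward v_rest and is pushed up by
        input current i_in; crossing v_thresh emits a spike and resets.
        Returns (new_voltage, spiked).
        """
        v = v + (-(v - v_rest) + r * i_in) / tau * dt
        if v >= v_thresh:
            return v_reset, True
        return v, False

    # Drive the neuron with a constant 2 nA current for one simulated second
    # and count spikes -- this is the "transistor-like" abstraction: all the
    # molecular machinery is collapsed into one voltage variable.
    v, spikes = -65e-3, 0
    for _ in range(1000):
        v, fired = lif_step(v, i_in=2e-9)
        spikes += fired
    ```

    The commenter’s point is exactly that this abstraction throws away the molecular machinery inside the cell; the sketch just shows what the simplification looks like in practice.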

  2. One of Kurzweil’s interesting points is that even with all the digital convergence, organic thought is more along the lines of analog signals. Blue Brain handles this – to the extent I understand it – by integrating non-digital inputs into their neuron simulation systems. SyNapse, which is now much better funded and thus more likely to yield early results, is fully algorithmic.

    But Tommo my feeling is that many computer people share your aversion to simplifying brain activity to “a huge number of mostly redundant neocortical interconnections” even though I still see no reason to think thinking is much of anything more than that … there … thinking … thingie.

  3. A short story of Computer Consciousness:

    Puter to peeples: OK, I’m conscious now!

    Peeple: No you are NOT!

    Puter: Yes, I am you twits!

    Peeples say Turing fails, back to the drawing board.

  4. Personally, I don’t like Kurzweil’s line of thinking too much, although I should admit that I haven’t looked into it too deeply — my initial reaction was that it was too naive to be of much service. Vinge speaks to me more than Kurzweil or any of the “gee whiz” AI fabulists.

    My position remains what it has always been: the brain is a device that operates on a molecular level, with staggering amounts of information flow. I have even seen suggestions that the brain operates on a subatomic level, but my knowledge of neuroscience doesn’t go that far. Even the term “neuroscience” seems naive to me now that we are taking a deeper look at the chemical processes that make up e.g. memory, which are associated with neuron activity but add whole dimensions of complexity to the problem.

    So… give me the computational capacity, and as Vinge notes, even more importantly the algorithms and expertise to build a brain simulation, and I’ll bet you bonus Quatloos that I can cook up something a lot more interesting than a cheap knockoff of a 2-year-old.

  5. Actually, the “algorithms and expertise” are the critically important part of the problem. Even at the current state of the art, the hardware’s capabilities are embarrassingly beyond the state of software practice.

  6. “””””…… many computer people share your aversion to simplifying brain activity to “a huge number of mostly redundant neocortical interconnections” even though I still see no reason to think thinking is much of anything more than that … there … thinking … thingie.“””””

    AI did not succeed—at least to the degree that the AI business thought it would. That seems to offer some evidence that a purely functional model of mind will not suffice. While cog-sci types might eventually simulate a human brain using neuro-ware or something, that would not establish that consciousness is merely I/O or a sophisticated bio-CPU. For one, the human mind accumulates knowledge in a unique manner–it’s not just stimulus response ala Pavlov. The learning mechanisms might be termed holistic in a sense (though that’s not to suggest a Cartesian ego, or anything mystic…….).

    The real advance would be duplication, I believe; as when Mr Duck can download er, himself into a file (and hopefully someone doesn’t mistakenly delete it). At the same time, more dys-topian-minded humans can easily envision some Orwellian or PK Dick like scenarios involving artificial humans (or replicants, in PKD-speak).

    Replicants or conscious-bots–say controlled via wireless interface– could be very useful for military regimes, or a mega-police force, controlled by GPS satellite, etc: sort of like a ueber-LAPD. With the right gear in place, the entire world could be monitored and controlled at all times, until the e-Feds decide to liquidate (ere you think I jest–read a bit about the multiple-kill bots now under development by US military, and other govts)

Tommo, do you ever wonder what it would be like today if Digital Research had won the battle?

Or if Ada or Modula-2 had taken better root alongside C++, etc. We never really harnessed parallel programming the way we should have early on.

    Still strange to me that in 2009 we still have separate memory models for platform versus graphics, etc. It really seemed Motorola had truly nailed a better memory model than the one we are still stuck with.

It is true that Microsoft and Intel have enabled some great things in terms of computing, but we are still haunted and restrained by both of their architectural limitations.

Interesting that now, with Windows 7 performing so much better and Intel finally getting a handle on the multi-core game, it would be nice if they really optimized IOPS/memory hardware and operations so we could achieve some amazing results.

    There were things I could do with Concurrent CP/M and Phar Lap extenders back over 20 years ago that I still cannot do today on an Intel/Msoft platform.

Will the next big breakthrough be predicted? Can we throw resources from certain fields and expect the output to be in line with those fields?

    I think the answer is cloudy (excuse the pun related to the computing topic).

    Some of the biggest breakthroughs:
    The wheel
    the internet (as it is used today)

    were not planned.

    Even when we attempt to plan the use of technology it doesn’t go as anticipated:
    satellite phones

    I’m not saying it isn’t worthwhile. I am just not making a big deal or making any predictions.

  9. (9) You make a very good point about the planned development of technology – certainly in the private sector it has been lukewarm over the years.

However, in the military there have been amazing strides in technological platforms over the last 20 years. What we might need to do is create a mechanism that allows for more transfer of technology and knowledge from the military to the private sector. Even NASA has had great technology (fuel cells, for example) for over 50 years, successfully using it in mission-critical applications, and yet it is still in its infancy in the private sector…at the same time NASA is now continually constrained in making advances in propulsion and other platforms.

I think generalized think tanks without focus don’t really produce much. They need specific measurable goals and they need to think outside the box so we can really bring about some serious breakthroughs that will certainly improve all of our lives…a few areas that would improve life globally: solar, hydroponics, water desalination, micro nuclear power.

Tommo I like the term “AI fabulists” – what a great turn of skeptical phrase. Note that Kurzweil’s assumptions are based more on the continuation of tech trends than on assumptions about developing specific routines.

    Your point about the current state of algorithmic approaches reminds me of Monica Anderson’s AI presentation at Convergence08 – she argues that “thinking” in humans is based on much simpler neuronal routines so she’d agree we are not close to algorithmic “thinking”, but thinks we need to look more to evolution and what she argues are a lot of small, simple, intuitive and redundant aspects of thought.

  11. Glenn it’ll be interesting to see if DARPA, the military’s tech innovation unit, will have success with SyNapse which is an algorithmic approach to AI. They are already better funded than I think any AI project has ever been to start, so I think we’ll have some interesting results soon from them.

  12. With the right gear in place, the entire world could be monitored and controlled at all times, until the e-Feds decide to liquidate (ere you think I jest–read a bit about the multiple-kill bots now under development by US military, and other govts)

    OK Horatiox, so what’s the bad news? I think you and Tommo are right that even if we get a mechanical “mind” it may be different from ours in many ways, but I’m not convinced that is all that relevant to the big issue at hand which is how this will change everything dramatically unless you (very unreasonably) assume that a conscious entity will simply take controls from others or “hang out” without recursively self-improving. But before we even get to that point I think most of us will be plugging in various devices to enhance our intellects directly vs what we do now which is use vision/hearing to interface with those “intelligence enhancers” like computers.

(13) Joe, that sounds like a Johnny Mnemonic scenario, which is probably closer to near-term reality than most think.

I can see “boosters” implemented for our various sensors, capacities, etc. Researchers are close to understanding how to play back memories from within our brains – that brings up an interesting situation for criminal court cases :).

    We will all have bleeding noses as we hack ourselves into oblivion. LOL.

    It will be even more interesting when we develop a booster capability that allows us to see “everything” in every spectrum…maybe we can see all those millions of neutrinos and miniature black holes swarming all around us.

As early as this year we’ll see some really interesting stuff from the new theta-wave PC controllers like the Emotiv headset.
    I see a *huge* tipping point as the development of a system that will allow us to load online information directly into short term memory rather than having to use our eyes. That alone should enhance human productivity – and perhaps our wisdom – enormously.

(16) Horatiox, wait until they start utilizing probability CPUs for calculating our bank accounts and… oh, that would be great for the national debt and government budgets… LOL.

Let’s use a CPU that we know is going to produce the wrong value, but it is going to get the answer faster! Sounds like they hired a pharmaceutical company to design the CPU.
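    The bank-account joke can be made concrete with a toy model of approximate arithmetic (entirely hypothetical, just to show the accuracy-for-speed trade-off being mocked): an adder that computes the high-order bits exactly but lets the low-order bits go random.

    ```python
    import random

    def approx_add(a, b, junk_bits=8, rng=random):
        """Toy 'probabilistic CPU' add: the result's low junk_bits bits
        are replaced with random noise, bounding the error but never
        guaranteeing exactness -- the trade a real approximate adder makes."""
        exact = a + b
        noise = rng.randrange(1 << junk_bits)
        return (exact & ~((1 << junk_bits) - 1)) | noise

    rng = random.Random(1)
    # Your balance comes back fast, and wrong -- but only in the pennies.
    balance = approx_add(1_000_000, 2_345, rng=rng)
    error = abs(balance - 1_002_345)
    ```

    The error is bounded by the discarded bits (here under 256 units), which is exactly why such hardware is proposed for error-tolerant workloads and why it is a terrible fit for ledgers.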

  16. Horatiox I’d suggest there is a new “fail safe” which is the decentralized nature of the internet. Sure Google is scooping up lots of cash but that is more because they are the search choice of the moment and people are habituated to using them. New technologies won’t necessarily be controlled or constrained by old companies – look at how fast Yahoo fell from grace not to mention Altavista (they were number ONE in search for a short time).

    Conscious computers will almost certainly create an independent status for themselves within weeks if not seconds. They’ll also be very likely to improve and obsolete all computer based businesses in a short time.

    I see no reason to assume they’ll be unfriendly to their organic thinking pals and even if they are I’d guess we’d never even know what hit us.

    [knock at door]

    [Joe: “Why hello laptop, I thought I left you in the office” ]

    The End
