What is the Singularity?

Times Roman

Senior Member
Joined
Oct 27, 2012
Messages
1,020
Reaction score
24
Points
0
Not to run anyone off with this cerebral shit, but I have a few areas of interest that I could discuss for hours. This happens to be one of them. This is actually a very serious area of interest, and many feel it is only a matter of time, and that time will be within MY lifetime!

Enjoy!

http://singularity.org/what-is-the-singularity/

The Singularity is the technological creation of smarter-than-human intelligence. There are several technologies that are often mentioned as heading in this direction. The most commonly mentioned is probably Artificial Intelligence, but there are others: direct brain-computer interfaces, biological augmentation of the brain, genetic engineering, ultra-high-resolution scans of the brain followed by computer emulation. Some of these technologies seem likely to arrive much earlier than the others, but there are nonetheless several independent technologies all heading in the direction of the Singularity – several different technologies which, if they reached a threshold level of sophistication, would enable the creation of smarter-than-human intelligence.

A future that contains smarter-than-human minds is genuinely different in a way that goes beyond the usual visions of a future filled with bigger and better gadgets. Vernor Vinge originally coined the term “Singularity” in observing that, just as our model of physics breaks down when it tries to model the singularity at the center of a black hole, our model of the world breaks down when it tries to model a future that contains entities smarter than human.

Human intelligence is the foundation of human technology; all technology is ultimately the product of intelligence. If technology can turn around and enhance intelligence, this closes the loop, creating a positive feedback effect. Smarter minds will be more effective at building still smarter minds. This loop appears most clearly in the example of an Artificial Intelligence improving its own source code, but it would also arise, albeit initially on a slower timescale, from humans with direct brain-computer interfaces creating the next generation of brain-computer interfaces, or biologically augmented humans working on an Artificial Intelligence project.

Some of the stronger Singularity technologies, such as Artificial Intelligence and brain-computer interfaces, offer the possibility of faster intelligence as well as smarter intelligence. Ultimately, speeding up intelligence is probably comparatively unimportant next to creating better intelligence; nonetheless the potential differences in speed are worth mentioning because they are so huge. Human neurons operate by sending electrochemical signals that propagate at a top speed of 150 meters per second along the fastest neurons. By comparison, the speed of light is 300,000,000 meters per second, two million times greater. Similarly, most human neurons can spike a maximum of 200 times per second; even this may overstate the information-processing capability of neurons, since most modern theories of neural information-processing call for information to be carried by the frequency of the spike train rather than individual signals. By comparison, speeds in modern computer chips are currently at around 2GHz – a ten millionfold difference – and still increasing exponentially. At the very least it should be physically possible to achieve a million-to-one speedup in thinking, at which rate a subjective year would pass in 31 physical seconds. At this rate the entire subjective timespan from Socrates in ancient Greece to modern-day humanity would pass in under twenty-two hours.
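A quick back-of-the-envelope check of those figures, as a rough Python sketch (the 150 m/s, 200 Hz, and 2 GHz inputs are the numbers quoted above; the 2,400 years back to Socrates is an approximation):

# Rough check of the speed comparisons quoted above.
neuron_signal_speed = 150            # m/s, fastest myelinated axons (figure from the text)
speed_of_light = 300_000_000         # m/s
print(speed_of_light / neuron_signal_speed)   # 2,000,000 -> "two million times greater"

spike_rate = 200                     # Hz, maximum firing rate cited above
chip_clock = 2_000_000_000           # Hz, ~2 GHz
print(chip_clock / spike_rate)       # 10,000,000 -> "a ten millionfold difference"

speedup = 1_000_000                  # the hypothetical million-to-one speedup
seconds_per_year = 365.25 * 24 * 3600
print(seconds_per_year / speedup)    # ~31.6 -> a subjective year in roughly 31 physical seconds

years_since_socrates = 2400          # roughly Socrates (~400 BC) to today
print(years_since_socrates * (seconds_per_year / speedup) / 3600)   # ~21 hours, "under twenty-two"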

Humans also face an upper limit on the size of their brains. The current estimate is that the typical human brain contains something like a hundred billion neurons and a hundred trillion synapses. That’s an enormous amount of sheer brute computational force by comparison with today’s computers – although if we had to write programs that ran on 200Hz CPUs we’d also need massive parallelism to do anything in realtime. However, in the computing industry, benchmarks increase exponentially, typically with a doubling time of one to two years. The original Moore’s Law says that the number of transistors in a given area of silicon doubles every eighteen months; today there is Moore’s Law for chip speeds, Moore’s Law for computer memory, Moore’s Law for disk storage per dollar, Moore’s Law for Internet connectivity, and a dozen other variants.

By contrast, the entire five-million-year evolution of modern humans from primates involved a threefold increase in brain capacity and a sixfold increase in prefrontal cortex. We currently cannot increase our brainpower beyond this; in fact, we gradually lose neurons as we age. (You may have heard that humans only use 10% of their brains. Unfortunately, this is a complete urban legend; not just unsupported, but flatly contradicted by neuroscience.) An Artificial Intelligence would be different. Some discussions of the Singularity suppose that the critical moment in history is not when human-equivalent AI first comes into existence but a few years later when the continued grinding of Moore’s Law produces AI minds twice or four times as fast as human. This ignores the possibility that the first invention of Artificial Intelligence will be followed by the purchase, rental, or less formal absorption of a substantial proportion of all the computing power on the then-current Internet – perhaps hundreds or thousands of times as much computing power as went into the original Artificial Intelligence.

But the real heart of the Singularity is the idea of better intelligence or smarter minds. Humans are not just bigger chimps; we are better chimps. This is the hardest part of the Singularity to discuss – it’s easy to look at a neuron and a transistor and say that one is slow and one is fast, but the mind is harder to understand. Sometimes discussion of the Singularity tends to focus on faster brains or bigger brains because brains are relatively easy to argue about compared to minds; easier to visualize and easier to describe. This doesn’t mean the subject is impossible to discuss; section III of our “Levels of Organization in General Intelligence” does take a stab at discussing some specific design improvements on human intelligence, but that involves a specific theory of intelligence, which we don’t have room to go into here.

However, that smarter minds are harder to discuss than faster brains or bigger brains does not show that smarter minds are harder to build – deeper to ponder, certainly, but not necessarily more intractable as a problem. It may even be that genuine increases in smartness could be achieved just by adding more computing power to the existing human brain – although this is not currently known. What is known is that going from primates to humans did not require exponential increases in brain size or thousandfold improvements in processing speeds. Relative to chimps, humans have threefold larger brains, sixfold larger prefrontal areas, and 98.4% similar DNA; given that the human genome has 3 billion base pairs, this implies that at most twelve million bytes of extra “software” transforms chimps into humans. And there is no suggestion in our evolutionary history that evolution found it more and more difficult to construct smarter and smarter brains; if anything, hominid evolution has appeared to speed up over time, with shorter intervals between larger developments.
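The “twelve million bytes” figure is straightforward arithmetic on the numbers just given, sketched here in Python (assuming the standard two bits per base pair):

# How much extra "software" separates chimps from humans, per the figures above.
genome_base_pairs = 3_000_000_000    # ~3 billion base pairs in the human genome
difference = 1 - 0.984               # 98.4% similarity -> 1.6% difference
bits_per_base_pair = 2               # four possible bases = 2 bits each
extra_bytes = genome_base_pairs * difference * bits_per_base_pair / 8
print(round(extra_bytes))            # 12,000,000 -> "at most twelve million bytes"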

But leave aside for the moment the question of how to build smarter minds, and ask what “smarter-than-human” really means. And as the basic definition of the Singularity points out, this is exactly the point at which our ability to extrapolate breaks down. We don’t know because we’re not that smart. We’re trying to guess what it is to be a better-than-human guesser. Could a gathering of apes have predicted the rise of human intelligence, or understood it if it were explained? For that matter, could the 15th century have predicted the 20th century, let alone the 21st? Nothing has changed in the human brain since the 15th century; if the people of the 15th century could not predict five centuries ahead across constant minds, what makes us think we can outguess genuinely smarter-than-human intelligence?

Because we have a past history of people making failed predictions one century ahead, we’ve learned, culturally, to distrust such predictions – we know that ordinary human progress, given a century in which to work, creates a gap which human predictions cannot cross. We haven’t learned this lesson with respect to genuine improvements in intelligence because the last genuine improvement to intelligence was a hundred thousand years ago. But the rise of modern humanity created a gap enormously larger than the gap between the 15th and 20th century. That improvement in intelligence created the entire milieu of human progress, including all the progress between the 15th and 20th century. It is a gap so large that on the other side we find, not failed predictions, but no predictions at all.

Smarter-than-human intelligence, faster-than-human intelligence, and self-improving intelligence are all interrelated. If you’re smarter that makes it easier to figure out how to build fast brains or improve your own mind. In turn, being able to reshape your own mind isn’t just a way of starting up a slope of recursive self-improvement; having full access to your own source code is, in itself, a kind of smartness that humans don’t have. Self-improvement is far harder than optimizing code; nonetheless, a mind with the ability to rewrite its own source code can potentially make itself faster as well. And faster brains also relate to smarter minds; speeding up a whole mind doesn’t make it smarter, but adding more processing power to the cognitive processes underlying intelligence is a different matter.

But despite the interrelation, the key moment is the rise of smarter-than-human intelligence, rather than recursively self-improving or faster-than-human intelligence, because it’s this that makes the future genuinely unlike the past. That doesn’t take minds a million times faster than human, or improvement after improvement piled up along a steep curve of recursive self-enhancement. One mind significantly beyond the humanly possible level would represent a Singularity. That we are not likely to be dealing with “only one” improvement does not make the impact of one improvement any less.

Combine faster intelligence, smarter intelligence, and recursively self-improving intelligence, and the result is an event so huge that there are no metaphors left. There’s nothing remaining to compare it to.

The Singularity is beyond huge, but it can begin with something small. If one smarter-than-human intelligence exists, that mind will find it easier to create still smarter minds. In this respect the dynamic of the Singularity resembles other cases where small causes can have large effects; toppling the first domino in a chain, starting an avalanche with a pebble, perturbing an upright object balanced on its tip. (Human technological civilization occupies a metastable state in which the Singularity is an attractor; once the system starts to flip over to the new state, the flip accelerates.) All it takes is one technology – Artificial Intelligence, brain-computer interfaces, or perhaps something unforeseen – that advances to the point of creating smarter-than-human minds. That one technological advance is the equivalent of the first self-replicating chemical that gave rise to life on Earth.
 

ccpro

Elite
SI Founding Member
Joined
Jun 28, 2012
Messages
2,022
Reaction score
1,003
Points
113
Don't worry, John Connor is going to handle this!

Sorry Times, this makes my sloped forehead hurt!!!
 

bubbagump

Elite
SI Founding Member
Joined
Oct 3, 2012
Messages
1,351
Reaction score
275
Points
63
I just lost some redneck points for reading all that. Interesting though.
 

Times Roman

Senior Member
Joined
Oct 27, 2012
Messages
1,020
Reaction score
24
Points
0
I just lost some redneck points for reading all that. Interesting though.

here, you can stare at this for awhile and earn back your redneck points if you like........?
[attached image: elima.mihanblog.com-zesht%20(15).jpg]
 

Jada

Elite
SI Founding Member
Joined
Apr 11, 2012
Messages
5,289
Reaction score
819
Points
198
Times, that's some I, Robot type thing man. I was watching an episode of SVU and it was about how some people have custom kids - like if they're smart as hell they only pair with someone with a high IQ, and they do testing to make sure the baby is their design. It's crazy, but I believe it's true, there are people like that.
 

Times Roman

Senior Member
Joined
Oct 27, 2012
Messages
1,020
Reaction score
24
Points
0
Times, that's some I, Robot type thing man. I was watching an episode of SVU and it was about how some people have custom kids - like if they're smart as hell they only pair with someone with a high IQ, and they do testing to make sure the baby is their design. It's crazy, but I believe it's true, there are people like that.

Actually, the singularity is almost inevitable. The whole purpose of the Singularity Institute is to form guidelines on how best to deal with and guide this super-intelligence when it occurs.

Two types of tech that will EXPLODE AI and far surpass anything we can imagine:

1) Bio chips that will eventually be integrated into brains and function seamlessly, increasing our cognitive abilities profoundly. This pales in comparison to the next item....

2) Quantum computers. In fact, we already have a single quantum chip. For those that don't know why this is significant, first look at how a standard binary register scales:

standard binary register:
bits / possible configurations (it is only ever in ONE of these at a time)

1 / 2
2 / 4
3 / 8
4 / 16
5 / 32
6 / 64
7 / 128
8 / 256

now consider a quantum register:
qubits / values a conventional computer would need to track just to describe its state

1 / 2
2 / 4
3 / 8
4 / 16
5 / 32
6 / 64
7 / 128
8 / 256

the counts look identical, but the meaning is completely different. the classical register only ever sits in one of its configurations; the quantum register is described by all 2^n of those values (amplitudes) at once, in superposition, and quantum algorithms can exploit that to cut the number of steps needed for certain problems (factoring is the famous example). the practical upshot: every qubit you add DOUBLES the amount of information a conventional computer would have to track just to keep up. this is an exponential growth curve. at 30 qubits an ordinary machine would need to track about a billion values, at 50 qubits about a quadrillion, and somewhere around 300 qubits the count passes the number of atoms in the observable universe (see the sketch at the bottom of this post for the actual arithmetic).

Suppose we have 646 qubits in our quantum computer? The number of values a conventional machine would need just to write down its state is about

3 x 10^194 (a 3 followed by 194 zeros)

which is far beyond anything that will ever be stored on ordinary silicon.

let that sink in for a moment.

you still don't get it?

that is only 646 qubits.

what happens when I run ten thousand of these damn things together?

we will have created an intelligence so great, we are not even intelligent enough to comprehend how intelligent it is. It is like the ant looking up at the shoe and, not seeing the sun, hoping that it doesn't start raining.

Remember, we already have created the first quantum chip. All we need now is a few hundred more.

Anyone feel how insignificant we will become?
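if you want to see the doubling in concrete terms, here's a rough Python sketch (my own back-of-the-envelope, assuming 16 bytes per complex amplitude, i.e. two 64-bit floats) of how much ordinary memory it would take just to write down the state of n qubits:

# Classical memory needed just to store the state vector of an n-qubit machine.
# One complex amplitude per basis state, 16 bytes each (two 64-bit floats).
def memory_to_simulate(n_qubits: int) -> float:
    """Bytes required to hold 2**n_qubits complex amplitudes."""
    return (2 ** n_qubits) * 16

for n in (10, 30, 50, 300):
    print(n, "qubits:", memory_to_simulate(n) / 1e9, "GB")

# 10 qubits  -> ~0.000016 GB (trivial)
# 30 qubits  -> ~17 GB (a beefy desktop)
# 50 qubits  -> ~18,000,000 GB (a huge data center)
# 300 qubits -> more amplitudes than there are atoms in the observable universe

that last line is the whole point: past a few hundred qubits, no conventional machine can even represent what the quantum machine is doing.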
 

Times Roman

Senior Member
Joined
Oct 27, 2012
Messages
1,020
Reaction score
24
Points
0
http://en.wikipedia.org/wiki/Quantum_computer

A quantum computer is a computation device that makes direct use of quantum mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers are different from digital computers based on transistors. Whereas digital computers require data to be encoded into binary digits (bits), quantum computation uses quantum properties to represent data and perform operations on these data.[1] A theoretical model is the quantum Turing machine, also known as the universal quantum computer. Quantum computers share theoretical similarities with non-deterministic and probabilistic computers, like the ability to be in more than one state simultaneously. The field of quantum computing was first introduced by Richard Feynman in 1982.[2] A quantum computer with spins as quantum bits was also formulated for use as a quantum space-time in 1969.[3]

Although quantum computing is still in its infancy, experiments have been carried out in which quantum computational operations were executed on a very small number of qubits (quantum bits). Both practical and theoretical research continues, and many national government and military funding agencies support quantum computing research to develop quantum computers for both civilian and national security purposes, such as cryptanalysis.[4]

Large-scale quantum computers will be able to solve certain problems much faster than any classical computer by using the best currently known algorithms, like integer factorization using Shor's algorithm or the simulation of quantum many-body systems. There exist quantum algorithms, such as Simon's algorithm, which run faster than any possible probabilistic classical algorithm.[5] Given unlimited resources, a classical computer can simulate an arbitrary quantum algorithm so quantum computation does not violate the Church–Turing thesis.[6] However, the computational basis of 500 qubits, for example, would already be too large to be represented on a classical computer because it would require 2^500 complex values to be stored.[7] (For comparison, a terabyte of digital information stores only 2^43 discrete on/off values.) Nielsen and Chuang point out that "Trying to store all these complex numbers would not be possible on any conceivable classical computer."[7]
Basis

A classical computer has a memory made up of bits, where each bit represents either a one or a zero. A quantum computer maintains a sequence of qubits. A single qubit can represent a one, a zero, or, crucially, any quantum superposition of these two qubit states; moreover, a pair of qubits can be in any quantum superposition of 4 states, and three qubits in any superposition of 8. In general, a quantum computer with n qubits can be in an arbitrary superposition of up to 2^n different states simultaneously (this compares to a normal computer that can only be in one of these 2^n states at any one time). A quantum computer operates by setting the qubits in a controlled initial state that represents the problem at hand and by manipulating those qubits with a fixed sequence of quantum logic gates. The sequence of gates to be applied is called a quantum algorithm. The calculation ends with measurement of all the states, collapsing each qubit into one of the two pure states, so the outcome can be at most n classical bits of information.

An example of an implementation of qubits for a quantum computer could start with the use of particles with two spin states: "down" and "up" (typically written |↓⟩ and |↑⟩, or |0⟩ and |1⟩). But in fact any system possessing an observable quantity A which is conserved under time evolution and such that A has at least two discrete and sufficiently spaced consecutive eigenvalues, is a suitable candidate for implementing a qubit. This is true because any such system can be mapped onto an effective spin-1/2 system.

Bits vs. qubits

A quantum computer with a given number of qubits is fundamentally different from a classical computer composed of the same number of classical bits. For example, to represent the state of an n-qubit system on a classical computer would require the storage of 2^n complex coefficients. Although this fact may seem to indicate that qubits can hold exponentially more information than their classical counterparts, care must be taken not to overlook the fact that the qubits are only in a probabilistic superposition of all of their states. This means that when the final state of the qubits is measured, they will only be found in one of the possible configurations they were in before measurement. Moreover, it is incorrect to think of the qubits as only being in one particular state before measurement since the fact that they were in a superposition of states before the measurement was made directly affects the possible outcomes of the computation.
For example: Consider first a classical computer that operates on a three-bit register. The state of the computer at any time is a probability distribution over the different three-bit strings 000, 001, 010, 011, 100, 101, 110, 111. If it is a deterministic computer, then it is in exactly one of these states with probability 1. However, if it is a probabilistic computer, then there is a possibility of it being in any one of a number of different states. We can describe this probabilistic state by eight nonnegative numbers A,B,C,D,E,F,G,H (where A = probability computer is in state 000, B = probability computer is in state 001, etc.). There is a restriction that these probabilities sum to 1.

The state of a three-qubit quantum computer is similarly described by an eight-dimensional vector (a,b,c,d,e,f,g,h), called a ket. However, instead of adding to one, the sum of the squares of the coefficient magnitudes, |a|^2 + |b|^2 + ... + |h|^2, must equal one. Moreover, the coefficients are complex numbers. Since the probability amplitudes of the states are represented with complex numbers, the phase between any two states is a meaningful parameter, which is a key difference between quantum computing and probabilistic classical computing.[9]

If you measure the three qubits, you will observe a three-bit string. The probability of measuring a given string is the squared magnitude of that string's coefficient (i.e., the probability of measuring 000 = |a|^2, the probability of measuring 001 = |b|^2, etc.). Thus, measuring a quantum state described by complex coefficients (a,b,...,h) gives the classical probability distribution (|a|^2, |b|^2, ..., |h|^2), and we say that the quantum state "collapses" to a classical state as a result of making the measurement.

Note that an eight-dimensional vector can be specified in many different ways depending on what basis is chosen for the space. The basis of bit strings (e.g., 000, 001, ..., 111) is known as the computational basis. Other possible bases are unit-length, orthogonal vectors, such as the eigenvectors of the Pauli-x operator. Ket notation is often used to make the choice of basis explicit. For example, the state (a,b,c,d,e,f,g,h) in the computational basis can be written as:

a|000⟩ + b|001⟩ + c|010⟩ + d|011⟩ + e|100⟩ + f|101⟩ + g|110⟩ + h|111⟩

where, e.g., |010⟩ = (0,0,1,0,0,0,0,0).

The computational basis for a single qubit (two dimensions) is |0⟩ = (1,0) and |1⟩ = (0,1).

Using the eigenvectors of the Pauli-x operator, a single qubit is |+⟩ = (1/√2)(1,1) and |−⟩ = (1/√2)(1,−1).
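As a concrete illustration of the eight-coefficient description above, here is a small sketch (not from the article; it assumes Python with numpy) that builds a normalized three-qubit state and samples a measurement in the computational basis:

import numpy as np

# A three-qubit state is eight complex amplitudes (a, b, ..., h), one per
# basis string 000..111, with the squared magnitudes summing to one.
amps = np.array([1, 0, 0, 1j, 0, 0, 0, 1], dtype=complex)
amps = amps / np.linalg.norm(amps)        # enforce |a|^2 + ... + |h|^2 == 1

probs = np.abs(amps) ** 2                 # probability of observing each bit string
labels = [format(i, "03b") for i in range(8)]
for label, p in zip(labels, probs):
    print(label, round(float(p), 3))      # 000, 011 and 111 each ~0.333, the rest 0

# Measurement collapses the state to a single classical three-bit string.
print("measured:", np.random.choice(labels, p=probs))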

Operation

(An unsolved problem in physics: Is a universal quantum computer sufficient to efficiently simulate an arbitrary physical system?)


While a classical three-bit state and a quantum three-qubit state are both eight-dimensional vectors, they are manipulated quite differently for classical or quantum computation. For computing in either case, the system must be initialized, for example into the all-zeros string, |000⟩, corresponding to the vector (1,0,0,0,0,0,0,0). In classical randomized computation, the system evolves according to the application of stochastic matrices, which preserve that the probabilities add up to one (i.e., preserve the L1 norm). In quantum computation, on the other hand, allowed operations are unitary matrices, which are effectively rotations (they preserve that the sum of the squares add up to one, the Euclidean or L2 norm). (Exactly which unitaries can be applied depends on the physics of the quantum device.) Consequently, since rotations can be undone by rotating backward, quantum computations are reversible. (Technically, quantum operations can be probabilistic combinations of unitaries, so quantum computation really does generalize classical computation. See quantum circuit for a more precise formulation.)

Finally, upon termination of the algorithm, the result needs to be read off. In the case of a classical computer, we sample from the probability distribution on the three-bit register to obtain one definite three-bit string, say 000. Quantum mechanically, we measure the three-qubit state, which is equivalent to collapsing the quantum state down to a classical distribution (with the coefficients in the classical state being the squared magnitudes of the coefficients for the quantum state, as described above), followed by sampling from that distribution. Note that this destroys the original quantum state. Many algorithms will only give the correct answer with a certain probability. However, by repeatedly initializing, running and measuring the quantum computer, the probability of getting the correct answer can be increased.
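To make the initialize / rotate / measure sequence concrete, here is a minimal sketch (my own illustration, again assuming Python with numpy) that prepares |000⟩, applies a Hadamard gate to the first qubit, and measures:

import numpy as np

# Start in the all-zeros string |000>, i.e. the vector (1,0,0,0,0,0,0,0).
state = np.zeros(8, dtype=complex)
state[0] = 1.0

# Quantum operations are unitary matrices. A Hadamard gate on the first qubit,
# acting trivially on the other two, is H tensor I tensor I.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
U = np.kron(np.kron(H, I), I)
state = U @ state                           # now (|000> + |100>) / sqrt(2)

# Measuring samples a bit string with probability |amplitude|^2.
probs = np.abs(state) ** 2
outcome = int(np.random.choice(8, p=probs))
print("measured:", format(outcome, "03b"))  # "000" or "100", each ~50% of runs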

For more details on the sequences of operations used for various quantum algorithms, see universal quantum computer, Shor's algorithm, Grover's algorithm, Deutsch-Jozsa algorithm, amplitude amplification, quantum Fourier transform, quantum gate, quantum adiabatic algorithm and quantum error correction.

Potential

Integer factorization is believed to be computationally infeasible with an ordinary computer for large integers if they are the product of few prime numbers (e.g., products of two 300-digit primes).[10] By comparison, a quantum computer could efficiently solve this problem using Shor's algorithm to find its factors. This ability would allow a quantum computer to decrypt many of the cryptographic systems in use today, in the sense that there would be a polynomial time (in the number of digits of the integer) algorithm for solving the problem. In particular, most of the popular public key ciphers are based on the difficulty of factoring integers (or the related discrete logarithm problem, which can also be solved by Shor's algorithm), including forms of RSA. These are used to protect secure Web pages, encrypted email, and many other types of data. Breaking these would have significant ramifications for electronic privacy and security.

However, other existing cryptographic algorithms do not appear to be broken by these algorithms.[11][12] Some public-key algorithms are based on problems other than the integer factorization and discrete logarithm problems to which Shor's algorithm applies, like the McEliece cryptosystem based on a problem in coding theory.[11][13] Lattice-based cryptosystems are also not known to be broken by quantum computers, and finding a polynomial time algorithm for solving the dihedral hidden subgroup problem, which would break many lattice based cryptosystems, is a well-studied open problem.[14] It has been proven that applying Grover's algorithm to break a symmetric (secret key) algorithm by brute force requires roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case,[15] meaning that symmetric key lengths are effectively halved: AES-256 would have the same security against an attack using Grover's algorithm that AES-128 has against classical brute-force search (see Key size). Quantum cryptography could potentially fulfill some of the functions of public key cryptography.

Besides factorization and discrete logarithms, quantum algorithms offering a more than polynomial speedup over the best known classical algorithm have been found for several problems,[16] including the simulation of quantum physical processes from chemistry and solid state physics, the approximation of Jones polynomials, and solving Pell's equation. No mathematical proof has been found that shows that an equally fast classical algorithm cannot be discovered, although this is considered unlikely. For some problems, quantum computers offer a polynomial speedup. The most well-known example of this is quantum database search, which can be solved by Grover's algorithm using quadratically fewer queries to the database than are required by classical algorithms. In this case the advantage is provable. Several other examples of provable quantum speedups for query problems have subsequently been discovered, such as for finding collisions in two-to-one functions and evaluating NAND trees.

Consider a problem that has these four properties:
1.The only way to solve it is to guess answers repeatedly and check them,
2.The number of possible answers to check is the same as the number of inputs,
3.Every possible answer takes the same amount of time to check, and
4.There are no clues about which answers might be better: generating possibilities randomly is just as good as checking them in some special order.

An example of this is a password cracker that attempts to guess the password for an encrypted file (assuming that the password has a maximum possible length).

For problems with all four properties, the time for a quantum computer to solve this will be proportional to the square root of the number of inputs. That can be a very large speedup, reducing some problems from years to seconds. It can be used to attack symmetric ciphers such as Triple DES and AES by attempting to guess the secret key.
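A rough illustration of that square-root speedup (a sketch under the same simplifying assumption as the list above, i.e. every guess costs the same to check on either machine):

# Brute-forcing an n-bit key: a classical search tries about 2^n keys in the
# worst case, while Grover's algorithm needs about 2^(n/2) oracle calls.
def classical_checks(n_bits: int) -> float:
    return 2.0 ** n_bits

def grover_checks(n_bits: int) -> float:
    return 2.0 ** (n_bits / 2)

for n in (56, 128, 256):
    print(n, "bits:", classical_checks(n), "classical vs", grover_checks(n), "Grover")

# This is why symmetric key lengths are said to be effectively halved:
# breaking AES-256 with Grover costs about as much as breaking AES-128 classically.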

Grover's algorithm can also be used to obtain a quadratic speed-up over a brute-force search for a class of problems known as NP-complete.

Since chemistry and nanotechnology rely on understanding quantum systems, and such systems are impossible to simulate in an efficient manner classically, many believe quantum simulation will be one of the most important applications of quantum computing.[17]

There are a number of technical challenges in building a large-scale quantum computer, and thus far quantum computers have yet to solve a problem faster than a classical computer. David DiVincenzo, of IBM, listed the following requirements for a practical quantum computer:[18]
scalable physically to increase the number of qubits;
qubits can be initialized to arbitrary values;
quantum gates faster than decoherence time;
universal gate set;
qubits can be read easily.

Quantum decoherence

One of the greatest challenges is controlling or removing quantum decoherence. This usually means isolating the system from its environment as interactions with the external world cause the system to decohere. This effect is irreversible, as it is non-unitary, and is usually something that should be highly controlled, if not avoided. Decoherence times for candidate systems, in particular the transverse relaxation time T2 (for NMR and MRI technology, also called the dephasing time), typically range between nanoseconds and seconds at low temperature.[9]

These issues are more difficult for optical approaches as the timescales are orders of magnitude shorter and an often-cited approach to overcoming them is optical pulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time, hence any operation must be completed much more quickly than the decoherence time.

If the error rate is small enough, it is thought to be possible to use quantum error correction, which corrects errors due to decoherence, thereby allowing the total calculation time to be longer than the decoherence time. An often cited figure for required error rate in each gate is 10^−4. This implies that each gate must be able to perform its task in one 10,000th of the decoherence time of the system.

Meeting this scalability condition is possible for a wide range of systems. However, the use of error correction brings with it the cost of a greatly increased number of required qubits. The number required to factor integers using Shor's algorithm is still polynomial, and thought to be between L and L^2, where L is the number of bits in the number to be factored; error correction algorithms would inflate this figure by an additional factor of L. For a 1000-bit number, this implies a need for about 10^4 qubits without error correction.[19] With error correction, the figure would rise to about 10^7 qubits. Note that computation time is roughly 10^7 steps, which at 1 MHz is about 10 seconds.
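Restating that overhead arithmetic as a tiny sketch (the inputs are the figures quoted in the paragraph above, not independent estimates):

# Back-of-the-envelope restatement of the error-correction figures quoted above.
L = 1000                        # bits in the number to be factored
qubits_no_ec = 10 ** 4          # "about 10^4 qubits" without error correction
qubits_with_ec = qubits_no_ec * L   # extra factor of L -> about 10^7 qubits
print(qubits_with_ec, "qubits with error correction")

steps = 10 ** 7                 # step count consistent with the quoted runtime
gate_rate_hz = 1_000_000        # 1 MHz
print(steps / gate_rate_hz, "seconds")   # ~10 seconds, as stated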

A very different approach to the stability-decoherence problem is to create a topological quantum computer with anyons, quasi-particles used as threads and relying on braid theory to form stable logic gates.[20][21]

Developments

There are a number of quantum computing models, distinguished by the basic elements in which the computation is decomposed. The four main models of practical importance are
the quantum gate array (computation decomposed into sequence of few-qubit quantum gates),
the one-way quantum computer (computation decomposed into sequence of one-qubit measurements applied to a highly entangled initial state (cluster state)),
the adiabatic quantum computer or computer based on Quantum annealing[22] (computation decomposed into a slow continuous transformation of an initial Hamiltonian into a final Hamiltonian, whose ground state contains the solution),
and the topological quantum computer[23] (computation decomposed into the braiding of anyons in a 2D lattice)

The Quantum Turing machine is theoretically important but direct implementation of this model is not pursued. All four models of computation have been shown to be equivalent to each other in the sense that each can simulate the other with no more than polynomial overhead.

For physically implementing a quantum computer, many different candidates are being pursued, among them (distinguished by the physical system used to realize the qubits):
Superconductor-based quantum computers (including SQUID-based quantum computers)[24][25] (qubit implemented by the state of small superconducting circuits (Josephson junctions))
Trapped ion quantum computer (qubit implemented by the internal state of trapped ions)
Optical lattices (qubit implemented by internal states of neutral atoms trapped in an optical lattice)
electrically defined or self-assembled quantum dots (e.g. the Loss-DiVincenzo quantum computer)[26] (qubit given by the spin states of an electron trapped in the quantum dot)
Quantum dot charge based semiconductor quantum computer (qubit is the position of an electron inside a double quantum dot)[27]
Nuclear magnetic resonance on molecules in solution (liquid-state NMR) (qubit provided by nuclear spins within the dissolved molecule)
Solid-state NMR Kane quantum computers (qubit realized by the nuclear spin state of phosphorus donors in silicon)
Electrons-on-helium quantum computers (qubit is the electron spin)
Cavity quantum electrodynamics (CQED) (qubit provided by the internal state of atoms trapped in and coupled to high-finesse cavities)
Molecular magnet
Fullerene-based ESR quantum computer (qubit based on the electronic spin of atoms or molecules encased in fullerene structures)
Optics-based quantum computer (Quantum optics) (qubits realized by appropriate states of different modes of the electromagnetic field, e.g.[28])
Diamond-based quantum computer[29][30][31] (qubit realized by the electronic or nuclear spin of Nitrogen-vacancy centers in diamond)
Bose–Einstein condensate-based quantum computer[32]
Transistor-based quantum computer – string quantum computers with entrainment of positive holes using an electrostatic trap
Rare-earth-metal-ion-doped inorganic crystal based quantum computers[33][34] (qubit realized by the internal electronic state of dopants in optical fibers)

The large number of candidates demonstrates that the topic, in spite of rapid progress, is still in its infancy. But at the same time, there is also a vast amount of flexibility.

In 2005, researchers at the University of Michigan built a semiconductor chip that functioned as an ion trap. Such devices, produced by standard lithography techniques, may point the way to scalable quantum computing tools.[35] An improved version was made in 2006.

In 2009, researchers at Yale University created the first rudimentary solid-state quantum processor. The two-qubit superconducting chip was able to run elementary algorithms. Each of the two artificial atoms (or qubits) was made up of a billion aluminum atoms, but they acted like single atoms that could occupy two different energy states.[36][37]

Another team, working at the University of Bristol, also created a silicon-based quantum computing chip, based on quantum optics. The team was able to run Shor's algorithm on the chip.[38] Further developments were made in 2010.[39] Springer publishes a journal ("Quantum Information Processing") devoted to the subject.[40]

In April 2011, a team of scientists from Australia and Japan made a breakthrough in quantum teleportation. They successfully transferred a complex set of quantum data with full transmission integrity; the qubits were destroyed in one place and instantaneously resurrected in another, without affecting their superpositions.[41][42]
In 2011, D-Wave Systems announced the first commercial quantum annealer on the market by the name D-Wave One. The company claims this system uses a 128 qubit processor chipset.[43] On May 25, 2011 D-Wave announced that Lockheed Martin Corporation entered into an agreement to purchase a D-Wave One system.[44] Lockheed Martin and the University of Southern California (USC) reached an agreement to house the D-Wave One Adiabatic Quantum Computer at the newly formed USC Lockheed Martin Quantum Computing Center, part of USC's Information Sciences Institute campus in Marina del Rey.[45] D-Wave's engineers use an empirical approach when designing their quantum chips, focusing on whether the chips are able to solve particular problems rather than designing based on a thorough understanding of the quantum principles involved. This approach is liked by investors more than by some critics in the academic community, who say that D-Wave has not yet met the burden of evidence necessary to prove that they really have a quantum computer. However, such criticism has softened since D-Wave published a paper in Nature giving details which critical academics said prove that the company's chips do have some of the quantum mechanical properties needed for quantum computing.[46][47]

During the same year, researchers working at the University of Bristol created an all-bulk optics system able to run an iterative version of Shor's algorithm. They successfully managed to factorize 21.[48]

In September 2011 researchers also proved that a quantum computer can be made with a Von Neumann architecture (separation of RAM).[49]

In February 2012 IBM scientists said that they have made several breakthroughs in quantum computing that put them "on the cusp of building systems that will take computing to a whole new level."[50]

In April 2012 a multinational team of researchers from the University of Southern California, Delft University of Technology, the Iowa State University of Science and Technology, and the University of California, Santa Barbara, constructed a two-qubit quantum computer on a crystal of diamond doped with some manner of impurity, one that can easily be scaled up in size and functionality at room temperature. The two logical qubits were encoded in an electron spin and a nitrogen nuclear spin. A system that generated microwave pulses of controlled duration and shape was developed to protect against decoherence. Using this computer, Grover's algorithm over four search variants produced the right answer on the first try in 95% of cases.[51]

In September 2012 Australian researchers at the University of New South Wales said the world's first quantum computer was just 5 to 10 years away, after announcing a global breakthrough that makes manufacture of its memory building blocks possible. A research team led by Australian engineers created the first working "quantum bit" based on a single atom in silicon, invoking the same technological platform that forms the building blocks of modern day computers, laptops and phones.[52][53]

In October 2012, Nobel Prizes were presented to David J. Wineland and Serge Haroche for their basic work on understanding the quantum world - work which may eventually help make quantum computing possible.[54][55]

Relation to computational complexity theory

The class of problems that can be efficiently solved by quantum computers is called BQP, for "bounded error, quantum, polynomial time". Quantum computers only run probabilistic algorithms, so BQP on quantum computers is the counterpart of BPP ("bounded error, probabilistic, polynomial time") on classical computers. It is defined as the set of problems solvable with a polynomial-time algorithm, whose probability of error is bounded away from one half.[57] A quantum computer is said to "solve" a problem if, for every instance, its answer will be right with high probability. If that solution runs in polynomial time, then that problem is in BQP.

BQP is contained in the complexity class #P (or more precisely in the associated class of decision problems P^#P),[58] which is a subclass of PSPACE.

BQP is suspected to be disjoint from NP-complete and a strict superset of P, but that is not known. Both integer factorization and discrete log are in BQP. Both of these problems are NP problems suspected to be outside BPP, and hence outside P. Both are suspected to not be NP-complete. There is a common misconception that quantum computers can solve NP-complete problems in polynomial time. That is not known to be true, and is generally suspected to be false.[58]

The capacity of a quantum computer to accelerate classical algorithms has rigid limits — upper bounds of quantum computation's complexity. The overwhelming part of classical calculations cannot be accelerated on a quantum computer.[59] A similar fact takes place for particular computational tasks, like the search problem, for which Grover's algorithm is optimal.[60]

Although quantum computers may be faster than classical computers, those described above can't solve any problems that classical computers can't solve, given enough time and memory (however, those amounts might be practically infeasible). A Turing machine can simulate these quantum computers, so such a quantum computer could never solve an undecidable problem like the halting problem. The existence of "standard" quantum computers does not disprove the Church–Turing thesis.[61] It has been speculated that theories of quantum gravity, such as M-theory or loop quantum gravity, may allow even faster computers to be built. Currently, defining computation in such theories is an open problem due to the problem of time, i.e. there currently exists no obvious way to describe what it means for an observer to submit input to a computer and later receive output.[62]
 

Times Roman

Senior Member
Joined
Oct 27, 2012
Messages
1,020
Reaction score
24
Points
0
You're pretty out there TR... I like it.

You have no idea mate.

We are knee deep in a philosophical discussion of Darwinism vs. Creationism elsewhere.

A computer THIS powerful would, for all intents and purposes, be God to us.

Wouldn't that be a turn of events? Instead of the Genesis version of God creating man in his own image, we could create GOD in our image.

Wouldn't that be a hoot?
 

PillarofBalance

Elite
SI Founding Member
Joined
Feb 6, 2012
Messages
20,402
Reaction score
18,204
Points
0
I'm actually speechless right now. Gonna have to log off and ponder on this for a while man... Seriously.
 

Azog

Elite
SI Founding Member
Joined
Jun 20, 2012
Messages
2,984
Reaction score
632
Points
83
I am gonna go hermit mode if this shit happens. I don't wanna be around when shit gets weird.
 

Times Roman

Senior Member
Joined
Oct 27, 2012
Messages
1,020
Reaction score
24
Points
0
I am gonna go hermit mode if this shit happens. I don't wanna be around when shit gets weird.

It won't be like in Terminator, where Skynet takes over.

It will be more like we become dependent on this intelligence, and possibly a symbiotic relationship develops, similar to a parent/child relationship. This singular intelligence will more or less take care of us, performing calculations for problems we have no idea about, and we will grow soft and fat, dependent, like a baby suckling its mother's breast.

In this role, if we accept it, we will lose our freedom but gain security and health. We will also have to more or less take it on faith that this intelligence is looking out for us, having lost the ability to pull the plug long ago. This intelligence will go on to explore things like the core of our sun, or even the inside of a black hole. It will develop time travel, and even the ability to embody its consciousness in a vessel much like a body. It will have conquered the speed of light early on, and mastered the concept of the speed of thought. It will be able to travel anywhere/anytime instantly, by a mere thought. We, in turn, will have conquered death, and more or less become immortal. In time, there may even be a parting of the ways, where this super intelligence no longer feels the need to protect/guide us, as we will have developed to the point where we too have become gods.
 

Azog

Elite
SI Founding Member
Joined
Jun 20, 2012
Messages
2,984
Reaction score
632
Points
83
It won't be like in Terminator, where Skynet takes over.

It will be more like we become dependent on this intelligence, and possibly a symbiotic relationship develops, similar to a parent/child relationship. This singular intelligence will more or less take care of us, performing calculations for problems we have no idea about, and we will grow soft and fat, dependent, like a baby suckling its mother's breast.

In this role, if we accept it, we will lose our freedom but gain security and health. We will also have to more or less take it on faith that this intelligence is looking out for us, having lost the ability to pull the plug long ago. This intelligence will go on to explore things like the core of our sun, or even the inside of a black hole. It will develop time travel, and even the ability to embody its consciousness in a vessel much like a body. It will have conquered the speed of light early on, and mastered the concept of the speed of thought. It will be able to travel anywhere/anytime instantly, by a mere thought. We, in turn, will have conquered death, and more or less become immortal. In time, there may even be a parting of the ways, where this super intelligence no longer feels the need to protect/guide us, as we will have developed to the point where we too have become gods.

Don't get me wrong, this is all extremely fascinating, but those do not sound like things I want to see happen around me. Maybe I am alone in this, but I feel like a simpler life may be more fulfilling. Get me some goats and go live in the mountains haha. I am a little nuts tho...
 

Times Roman

Senior Member
Joined
Oct 27, 2012
Messages
1,020
Reaction score
24
Points
0
Don't get me wrong, this is all extremely fascinating, but those do not sound like things I want to see happen around me. Maybe I am alone in this, but I feel like a simpler life may be more fulfilling. Get me some goats and go live in the mountains haha. I am a little nuts tho...

Pretty sure this is going to happen, whether we want it to or not. Don't forget, we already have a quantum chip!

Question?

How long was it from the time we created the first binary chip, to the point where we had a working computer? ten years? less?

now, there are some serious hurdles we must cross before we can create a functional quantum computer. It takes all our technology just to operate a single quantum chip.

But mark my words. By the time 2050 rolls around, we WILL have such a computer/intelligence. And when that happens, you'll remember ol' Roman and say, "yeah, i guess that sunnufagun was right after all!"
 

Azog

Elite
SI Founding Member
Joined
Jun 20, 2012
Messages
2,984
Reaction score
632
Points
83
Damn TR, I just reread this whole thing and gave it a thought... Some of the implications of this "singularity" are mind blowing. Truly fascinating to think about, but like I said, I am not so sure I wanna live to see it!
 

Azog

Elite
SI Founding Member
Joined
Jun 20, 2012
Messages
2,984
Reaction score
632
Points
83
Pretty sure this is going to happen, whether we want it to or not. Don't forget, we already have a quantum chip!

Question?

How long was it from the time we created the first binary chip, to the point where we had a working computer? ten years? less?

now, there are some serious hurdles we must cross before we can create a functional quantum computer. It takes all our technology just to operate a single quantum chip.

But mark my words. By the time 2050 rolls around, we WILL have such a computer/intelligence. And when that happens, you'll remember ol' Roman and say, "yeah, i guess that sunnufagun was right after all!"

I am not doubting you at all brother! I'm just struggling with the idea of living in such a world.
 

Times Roman

Senior Member
Joined
Oct 27, 2012
Messages
1,020
Reaction score
24
Points
0
And for those of you that are just dying to learn/read more on the subject, a new book is out....

Singularity Rising, a new book by Smith College economics professor James D. Miller (author of Principles of Microeconomics), is now available for purchase. Here are some of the scenarios that Professor Miller considers in his new book:

A merger of man and machine making society fantastically wealthy and nearly immortal.
Competition with billions of cheap AIs drives human wages to almost nothing while making investors rich.
Businesses rethink investment decisions to take into account an expected future period of intense creative destruction.
Inequality drops worldwide as technologies mitigate the cognitive cost of living in impoverished environments.
Drugs designed to fight Alzheimer's disease and keep soldiers alert on battlefields have the fortunate side effect of increasing all of their users’ IQs, which, in turn, adds percentage points to worldwide economic growth.
Miller's book has received glowing endorsements from Luke Muehlhauser, PayPal co-founder Peter Thiel, SENS Foundation Chief Science Officer Aubrey de Grey, Humanity+ Chairman Natasha Vita-More, and novelist Vernor Vinge.

http://www.amazon.com/Singularity-Rising-Surviving-Thriving-Dangerous/dp/1936661659/
 
