An Analysis of the Potential Exotic Forms of Computing and their Applications

DOI: 10.17577/IJERTV12IS050298


Karan Chawla

Ashoka University Gurugram, Haryana, India

Abstract: Every electron, photon, and other elementary particle carries pieces of information that are altered each time two of them interact. Information content and physical existence are intricately connected. One of the main drivers for the growth of quantum computation has been the prospect of building a quantum computer capable of carrying out Shor's algorithm for very large numbers. However, to acquire a more comprehensive perspective on quantum computers, it is critical to recognise that they will probably provide significant speedups for only a small subset of problems. Researchers are striving to create algorithms that exhibit quantum speedups and to identify the problems best suited to them. In general, it is anticipated that quantum computers will be of great assistance with optimisation problems, which are crucial to everything from financial trading to defence. According to quantum physics, spacetime, like other physical systems, is discrete. On minuscule scales, spacetime is frothy and foamy, making it impossible to determine distances and times with absolute precision. The amount of information that can be crammed into a given region of space depends on the size of the bits, which cannot be smaller than the foamy cells. By controlling the matter that falls into it, one may instruct a black hole to perform any desired computation. Black hole properties are inextricably tied to spacetime properties. The exotic forms of computing discussed here consist of applications of these ideas to quantum computing: black hole computing, quantum chromodynamics, and quantum electrodynamics. This review paper analyses these three potential forms of computing and their applications in quantum computing.

Keywords: Quantum Electrodynamics, Quantum Chromodynamics, Black Hole Computing.

  1. INTRODUCTION

    While both conventional and quantum computers aim to find solutions to problems, they manipulate data in fundamentally different ways to do so. This section introduces two fundamental concepts of quantum physics, superposition and entanglement, which are essential to understanding what makes quantum computers special.

    Superposition is the paradoxical capacity of a quantum object, such as an electron, to simultaneously exist in numerous "states." One of these states for an electron may correspond to the atom's lowest energy level, whilst another may correspond to the first excited level. When an electron is produced in a superposition of these two states, it has an equal chance of being found in the lower or the higher state. Only once this superposition is eliminated by a measurement can it be known which state the electron is in.

    Understanding superposition helps one understand the qubit, the basic informational unit of quantum computing. In conventional computing, bits are realized by transistors that are either on or off, representing the states 1 and 0. Qubits, in contrast to classical bits, which must always be in the 0 or 1 state, can exist in superpositions whose probabilities fluctuate and can be manipulated by quantum operations during a computation.
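    The following minimal Python sketch (not part of the original paper) illustrates this distinction: a qubit is represented as a normalized vector of two complex amplitudes, and the Born rule turns those amplitudes into the probabilities of measuring 0 or 1.

```python
import numpy as np

# A qubit is a normalized vector of two complex amplitudes: |psi> = a|0> + b|1>.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Equal superposition of |0> and |1> (e.g., the state a Hadamard gate makes from |0>).
psi = (ket0 + ket1) / np.sqrt(2)

# Born rule: each measurement outcome occurs with probability |amplitude|^2.
probabilities = np.abs(psi) ** 2
print(probabilities)  # [0.5 0.5] -- equal chance of reading out 0 or 1
```

    A classical bit, by contrast, would be just one of the basis vectors above, with no intermediate amplitudes to manipulate.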

    Quantum entities can be produced and/or manipulated to the point that none of them can be described without reference to the others, a phenomenon known as entanglement. Their distinct identities are lost. This idea becomes quite challenging when one considers how entanglement can persist over great distances. It appears as though information can move faster than the speed of light, because measurements on one member of an entangled pair instantaneously affect measurements on its partner. Even Einstein found this apparent action at a distance so unsettling that he called it "spooky."

    The common belief is that quantum computers accelerate computation by simultaneously attempting every solution to a problem. In reality, a quantum computer uses entanglement between qubits and the probabilities associated with superpositions to execute a series of operations (a quantum algorithm) in a way that enhances some probabilities (those of the right answers) while suppressing or even eliminating others (those of the wrong answers). When a measurement is made at the end of the computation, the likelihood of obtaining the right result should be maximized. The way they exploit probability and entanglement is what distinguishes quantum computers from classical computers.
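    As a toy illustration of this amplification (a sketch, not an algorithm from the paper), the sequence Hadamard, phase flip, Hadamard applied to a single qubit makes the amplitudes of the unwanted outcome cancel and those of the marked outcome add, so the marked answer is measured with certainty:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
Z = np.array([[1, 0], [0, -1]])                # phase-flip gate marks the |1> branch

psi = np.array([1, 0], dtype=complex)          # start in |0>
psi = H @ psi        # equal superposition of |0> and |1>
psi = Z @ psi        # attach a minus sign to the |1> amplitude
psi = H @ psi        # interference: the |0> amplitude cancels, |1> is reinforced

print(np.abs(psi) ** 2)  # [0. 1.] -- the marked outcome now has probability 1
```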

  2. BLACK HOLE COMPUTING

    It may appear as though black holes are the exception to the notion that everything is calculable. Einstein's general theory of relativity states that getting information out of them is impossible, despite the fact that entering information into them is not difficult.

    When material enters a black hole, it is absorbed, and the specifics of its makeup are seemingly lost forever. Black holes are only the most extreme illustration of the universal law that the cosmos stores and processes information. If an object falls inside a black hole, some of the information regarding its identity appears to be lost, such as the way it might have looked. This poses a serious question as to whether black holes obey the quantum mechanical law of information conservation, and is known as the black hole information paradox. A few scientists argue that Hawking radiation is in fact processed information being released, which shows that black holes do compute. Particles can flip one another whenever they come into contact. If any piece of matter is a computer, a black hole is nothing more or less than a computer that has been shrunk to its smallest conceivable size. As a computer shrinks, the gravitational attraction that its parts exert on one another grows stronger, until finally it becomes so powerful that nothing material can escape. The Schwarzschild radius, which measures the size of a black hole, is directly proportional to its mass. The shrunken computer can still perform about 10^51 operations per second, since its energy content remains the same. The amount of memory, however, does change. When gravity is negligible, the total storage capacity is proportional to the number of particles and hence to the volume. However, when gravity predominates, it links the particles, reducing their collective capacity to store information. A black hole's total storage capacity is proportional to its surface area.
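    The 10^51 figure quoted above follows from the Margolus-Levitin bound on the rate of elementary operations; the standard textbook form of the bound (not reproduced from this paper), applied to a one-kilogram computer, gives:

```latex
% Margolus-Levitin bound: maximum number of elementary operations per second
% for a system of total energy E, with E = mc^2 for a one-kilogram computer.
\[
  \nu_{\max} \;=\; \frac{2E}{\pi\hbar}
             \;=\; \frac{2mc^{2}}{\pi\hbar}
             \;\approx\; \frac{2\,(1\,\mathrm{kg})\,(3\times10^{8}\,\mathrm{m/s})^{2}}
                              {\pi\,(1.05\times10^{-34}\,\mathrm{J\,s})}
             \;\approx\; 5\times10^{50}\ \text{operations per second},
\]
```

    which is of the order of the 10^51 operations per second cited above.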

    A one-kilogram black hole can store around 10^16 bits, according to calculations made in the 1970s by Hawking and Jacob Bekenstein of the Hebrew University of Jerusalem; this is a significant decrease from the capacity of the same computer before compression.

    The black hole makes up for this by being a considerably quicker processor. In fact, the time it takes to flip a bit, about 10^-35 seconds, is equal to the time it takes for light to travel from one side of the computer to the other. The black hole is thus a serial computer, as opposed to the ultimate laptop, which is highly parallel; it functions as a single unit. Black holes radiate at a rate that is inversely proportional to their size, which means that large black holes, such as those in the centers of galaxies, lose energy considerably more slowly than they absorb matter. In the future, however, researchers may be able to create microscopic black holes in particle accelerators, and these holes should immediately explode in a burst of radiation. A black hole might be conceptualized as a temporary assembly of matter performing computation at the fastest pace feasible, rather than as a fixed entity. Nowadays, the majority of physicists believe that the radiation is a highly processed version of the information that was incorporated into the hole during its formation. Although matter cannot escape the hole, information can.
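    As a rough cross-check (a sketch using standard formulas, not a calculation reproduced from this paper), the Bekenstein-Hawking entropy and the light-crossing time of the Schwarzschild radius reproduce both the ~10^16-bit memory and the ~10^-35-second bit-flip time quoted for a one-kilogram black hole:

```python
import math

# Physical constants (SI units)
G = 6.674e-11       # gravitational constant
c = 2.998e8         # speed of light
hbar = 1.055e-34    # reduced Planck constant

M = 1.0                              # one-kilogram black hole
r_s = 2 * G * M / c**2               # Schwarzschild radius, ~1.5e-27 m
area = 4 * math.pi * r_s**2          # horizon area
l_p2 = G * hbar / c**3               # Planck length squared

# Bekenstein-Hawking entropy expressed in bits: A / (4 l_p^2 ln 2)
bits = area / (4 * l_p2 * math.log(2))
flip_time = r_s / c                  # light-crossing time of the hole

print(f"storage  ~ {bits:.1e} bits")          # ~4e16, i.e. of order 10^16 bits
print(f"bit flip ~ {flip_time:.1e} seconds")  # ~5e-36 s, i.e. of order 10^-35 s
```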

    Entanglement, a quantum phenomenon in which the characteristics of two or more systems stay associated beyond the boundaries of space and time, is the escape route.

    Entanglement makes it possible for particles to teleport, essentially moving information from one point to another at up to the speed of light. Two particles must first be entangled in order to begin the teleportation process, which has been demonstrated in the laboratory. Then a measurement is made on one of the particles together with some matter that holds the information to be transported. Even though the measurement removes the information from its initial position, entanglement ensures that it reappears on the second particle, no matter how far away. The measurement outcomes serve as the key to decipher the data.

    For black holes, a similar process could be at work. At the event horizon, entangled photon pairs appear. One of the photons escapes and becomes the Hawking radiation visible to the observer. The other falls in and collides with the singularity, along with the material that originally formed the hole. The annihilation of the infalling photon serves as a measurement, transferring the information contained in the matter to the emitted Hawking radiation. It is plausible that the singularities within black holes have a single state, much as the original singularity at the beginning of the universe may have had only one possible state. One might programme a black hole to carry out any desired calculation by manipulating the material that falls into it.

    Spacetime characteristics are inseparably linked to black hole characteristics, so if black holes can be viewed as computers, spacetime itself may be as well. According to quantum physics, spacetime is discrete like other physical systems. On tiny scales, spacetime is frothy and foamy, making it impossible to determine distances and times with absolute accuracy. The size of the bits, which cannot be smaller than the foamy cells, determines how much information can be packed into a given region of space. The cells that make up a spacetime region grow with the size of the region. This claim may initially seem contradictory, as if the atoms in an elephant were larger than those in a mouse. In actuality, Lloyd derived it from the same rules that limit computer power.

    Mapping the geometry of spacetime is a type of computation, one that estimates distances by transmitting and processing information. One approach is to cover a portion of space with a swarm of GPS satellites, each equipped with a clock and a radio transmitter. To measure a distance, a satellite broadcasts a signal and times how long it takes to arrive. The accuracy of the measurement is set by the rate at which the clocks tick. Since ticking is a computational activity, its maximum rate is given by the Margolus-Levitin theorem, which states that the time between ticks is inversely proportional to the energy. The energy itself is finite. If the satellites are given too much energy, or are clustered too tightly together, they will form a black hole and cease to function. The hole will still emit Hawking radiation, but since the wavelength of that radiation is comparable to the size of the hole, it is useless for mapping features on a finer scale. The maximum total energy of the satellite constellation is proportional to the radius of the region being mapped, so the energy grows more slowly than the region's volume. As the region expands, the cartographer faces an inescapable trade-off: decrease the satellite density (so the satellites are farther apart) or decrease the energy available to each satellite (so that its clock ticks more slowly). In either case, the measurement loses accuracy. Any given region of space appears to have a maximum quantity of information it can hold, proportional to its surface area rather than its volume. The holographic principle is usually believed to follow from the unresolved aspects of quantum gravity, but it also derives directly from the basic quantum limits on measurement accuracy.

    The cosmos is the greatest computer, and the concepts of computation may be applied to it just as they are to the smallest and most compact computers (black holes and spacetime foam). The cosmos may be infinite in size, but, at least in its current form, it has existed for a finite amount of time. The visible region is now tens of billions of light-years across.
    There must have been a computation performed within this region for us to be able to know its outcomes. The number of bits may be a little larger if the particles have any internal structure. Because these bits flip faster than they can interact with one another, conventional matter is a highly parallel computer, similar to the ultimate laptop but unlike the black hole.

    Dark energy functions in a very different way from conventional matter; it goes through a very limited set of processes. If dark energy encodes the maximum number of bits permitted by the holographic principle, the vast majority of those bits have had a chance to flip only once throughout cosmic history. As a result, the smaller number of conventional bits perform computations at considerably greater rates than these exotic bits, which are little more than bystanders. Whatever the dark energy is, it isn't computing very much, and it doesn't need to. Supplying the universe's missing mass and accelerating its expansion are computationally simple jobs. The universe computes quantum fields, chemicals, germs, people, stars, and galaxies, running Standard Model software. As it computes, it maps out its own spacetime geometry with the greatest accuracy permitted by the laws of physics. Existence is computed.

  3. QUANTUM ELECTRODYNAMICS

    By fusing the principles of quantum mechanics with special relativity, quantum field theory (QFT) offers a distinctive viewpoint on the nature of the natural world, its constituents, and how it functions. The Standard Model of elementary particles, which includes all known particles and their interactions with the exception of gravity, is likewise based on QFT. Theoretical predictions are verified experimentally through scattering experiments. With classical computers, calculating scattering amplitudes is a difficult task. Calculations at weak coupling are possible with the aid of a perturbative expansion using Feynman diagrams. Lattice field theory is employed for strong coupling and becomes exponentially harder as the number of lattice sites increases. In a quantum system, there are two ways to encode and process information. One is to discretize the information and store it in discrete systems, such as the electron's spin or the photon's polarisation; quantum algorithms are then applied to quantum systems with discrete spectra. The other, continuous variables (CV), initially proposed by Lloyd and Braunstein, relies on quantum systems whose observables have continuous spectra, such as the position and momentum of a particle or the quadratures of an electromagnetic field.
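    As a minimal illustration of the CV formalism (a sketch, not taken from the paper), a single qumode can be described by the covariance matrix of its quadratures; with hbar = 1 the vacuum has covariance 0.5*I, and a squeezing operation is a symplectic transform that reduces the noise of one quadrature at the expense of the other:

```python
import numpy as np

# Single qumode described by the covariance matrix of its quadratures (x, p),
# using the convention hbar = 1 so that the vacuum covariance is 0.5 * I.
vacuum_cov = 0.5 * np.eye(2)

def squeeze(cov, r):
    """Apply single-mode squeezing as the symplectic transform S = diag(e^-r, e^r)."""
    S = np.diag([np.exp(-r), np.exp(r)])
    return S @ cov @ S.T

squeezed = squeeze(vacuum_cov, r=1.0)
print(np.diag(squeezed))  # x-variance ~0.068 (below vacuum), p-variance ~3.69 (above)
```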

    For instance, advantages of extending the discrete cluster-state formalism to the continuous one include the deterministic creation of multipartite entangled states and high-fidelity measurements with current technology. Large entangled states for scalable quantum information and quantum computing, a dynamical squeezing gate for universal quantum information processing, and other proof-of-principle experimental demonstrations are all products of the physical realization of CV cluster-state quantum computing.

    Three discs of one semiconductor are embedded in a second semiconductor with a larger band gap. The middle disc is taller than either of the outer ones. Because the barriers between the discs are so thin, an electron can easily tunnel through them. A quantum dot (QD) is the structure made up of the set of three discs and the two barriers in between. Each QD that takes part in quantum computing needs to hold just one electron.

    The effective two-photon Hamiltonian neglects ac Stark shifts and terms that do not satisfy the resonance conditions.

    A sequence of voltage pulses applied across the gates of a pair of qubits produces the C-NOT operation. The capacity to perform a C-NOT operation is one of several requirements a universal quantum computer must meet. There are additional prerequisites. To load information into the qubit registers at the start of a quantum computation, arbitrary rotations of the qubit state vectors are necessary. Each qubit's state must also be read out after a quantum computation. A tunable antenna-coupled intersubband detector based on terahertz quantum wells has been proposed for this purpose. Terahertz photons are efficiently absorbed and detected by this device, but only within a limited bandwidth centred on the intersubband absorption frequency. This frequency is Stark tunable with the application of mild electric fields, such as those employed to tune the transition frequencies of the qubits in this proposal. We suggest adding such a detector to the cavity.
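    A minimal numpy sketch (not part of this proposal) of what the C-NOT requirement means at the level of state vectors: combined with a single-qubit rotation such as a Hadamard, the C-NOT matrix turns a product state into an entangled Bell state, which is the kind of capability a universal machine must provide.

```python
import numpy as np

# Two-qubit gates in the computational basis |00>, |01>, |10>, |11>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)

psi = np.zeros(4, dtype=complex)
psi[0] = 1.0                       # initialize the register in |00>

psi = np.kron(H, I2) @ psi         # rotate the control qubit into a superposition
psi = CNOT @ psi                   # C-NOT entangles the pair: (|00> + |11>)/sqrt(2)

print(np.round(psi, 3))            # amplitudes [0.707, 0, 0, 0.707]
```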

    Existing schemes for error correction require quantum logic gates to be processed in parallel.

    One may consider parallelizing the technique presented here by extending the cavity to provide many cavity modes in the frequency range over which the QD energy level spacings are tunable. A more exciting idea is to combine a nonlocal scheme like the one proposed here with a nearest-neighbor-coupled semiconductor scheme for quantum processing. In this scenario, nearest-neighbor interactions would be used to perform parallel logic gates within clusters of qubits, whereas cavity-photon-mediated long-range interactions would provide serial communication between qubits in distant clusters.

    Decoherence is the most challenging difficulty that most quantum computing proposals face. Decoherence of the QDs' electronic states as well as that of the cavity photons must be taken into account in the computer presented here.

    There are no experimental data on the decoherence of electronic intraband excitations in solitary QDs loaded with a single electron. Because the QDs are in a 3D cavity with a very high quality factor, the emission of freely propagating photons is suppressed. The remaining decoherence channel considered here is the interaction of each QD with the fluctuating potentials of the two gate electrodes connected to it.

    A superconductor serves as the material for the gate electrodes. When a QD is not performing a logic operation, a superconducting path connects its two gates to a superconducting ground. Since there is no dissipation, there are no thermal fluctuations. The gate electrodes also act as a filter for electric fields produced far from the QD. The connection to the superconducting ground must be severed while a QD is flipped, and low-frequency noise that occurs during this interval will affect the precision of a C-NOT operation. These and other potential problems with the C-NOT operation, as well as possible solutions, will be examined in a subsequent article. The energy levels and matrix elements of different dots will differ slightly. This inhomogeneity can be caused both by geometrical differences between the quantum dots and by the presence of quenched disorder (static charged defects). Inhomogeneity, or static disorder, does not cause decoherence of the quantum bits. However, each quantum dot in the quantum computer will need to be calibrated (for example, by executing a C-NOT operation) before a quantum computation is performed, in order to execute accurate one- and two-bit operations in an inhomogeneous population of quantum dots. Keep in mind that all solid-state quantum computer implementations will require calibration to combat disorder.

    A cavity photon's lifetime must be long enough to support several high-fidelity C-NOT operations. This will require the creation of few-mode THz cavities with very low loss. Without knowledge of the design of the quantum computer, which is outside the scope of this work, as well as material parameters that have not yet been measured, it is impossible to analyse the expected cavity losses. It is conceivable that cavities made of conventional metals will cause unacceptable losses. Dielectric cavities, manufactured, for example, from ultrapure Si, are one alluring prospect. The free-carrier absorption that currently dominates measurements of optical loss in silicon at THz frequencies can be removed by purifying and cooling the silicon.

    Residual losses in silicon arise when a THz photon is converted into two phonons. To our knowledge, these small THz losses have not been estimated or measured.

    Each dot's energy levels are controlled by its own gate electrodes. A cavity containing THz photons functions as a data highway that can connect any two quantum dots. A C-NOT operation can be performed between any two quantum bits in the computer using a series of adiabatic voltage pulses delivered to individual quantum dots. This specific proposal for a quantum information processor is intended to stimulate theoretical and experimental work.

    The challenges of implementing this idea for quantum computation are significant, as they are for all such proposals. Among the most difficult tasks are the creation of novel QD types that can be gated and loaded with a single electron, the fabrication of few-mode THz cavities with a very high Q, and the detection of single THz photons.

    Although each of these worthy problems is beyond the current state of the art, the tremendous pace of development in materials science and THz technology makes us hopeful that they will be addressed in the not-too-distant future.

  4. QUANTUM CHROMODYNAMICS

Since QCD is a quantum field theory, one must deal from the outset with mathematical notions that are unfamiliar to many physicists. Relativistic perturbation theory, which was created for quantum electrodynamics (QED) sixty years ago by Feynman, Schwinger, Tomonaga, and others, is the most popular theoretical tool for quantum field theories. Perturbation theory provides the fundamental connection between the mathematical theory and experiment, both in QED and in the Glashow-Weinberg-Salam theory, which unifies QED with the weak nuclear force.

Calculating how virtual pairs lead to a distance dependence of the coupling in quantum gauge theories is a basic example of perturbative quantum field theory. The QCD coupling is small enough at short distances (or, equivalently, at high energies) that perturbation theory is once again applicable. For instance, the cross section for an electron-positron pair to annihilate and produce a quark-antiquark pair can be calculated accurately as a power series in the strong coupling α_s. Each quark and antiquark fragments into a jet of hadrons, primarily pions but also including some protons and neutrons. The quark and antiquark determine the jets' overall characteristics, including their energy flow and angular distribution. Quark properties can be measured in this way, and they are found to agree with theoretical calculations. The Standard Model of elementary particles is built on QCD and the electroweak theory. The chromodynamic and electroweak gauge symmetries, together with the corresponding quark and lepton quantum numbers, are so well established that many particle physicists regard them as laws of nature; we are unable to imagine a more complete theory that does not include them. The Standard Model, however, encompasses more. We know that something must generate masses for the quarks and leptons as well as spontaneously break the electroweak symmetry (perhaps the same thing). Interactions in the Standard Model model these phenomena. Although the model interactions are insufficient, they are consistent with the findings.
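The distance (equivalently, energy) dependence of the coupling described above is conventionally summarized by the one-loop renormalization-group result, quoted here in its standard textbook form rather than from this paper:

```latex
% One-loop running of the strong coupling with energy scale Q
% (n_f is the number of active quark flavours); the coupling shrinks at
% high Q (asymptotic freedom) and grows at low Q, where perturbation theory fails.
\[
  \alpha_s(Q^2) \;=\; \frac{\alpha_s(\mu^2)}
      {1 + b_0\,\alpha_s(\mu^2)\,\ln\!\left(Q^{2}/\mu^{2}\right)},
  \qquad
  b_0 \;=\; \frac{33 - 2 n_f}{12\pi}.
\]
```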

The Large Hadron Collider (LHC) at CERN is expected to provide particle physicists with new knowledge. Had the neutron been lighter than the proton, hydrogen atoms would not have been stable; the only things in such a world would be neutron stars surrounded by a swarm of photons and neutrinos. Because neutrons are in fact heavier than protons, our cosmos is not at all like this. The neutron-proton mass difference can be traced, via QCD, to the down-up quark mass difference, and ultimately to the underlying origin of quark masses. Many particle theorists are interested in finding a straightforward explanation for the pattern of quark masses, since it is inescapably complex.

The basic parameters of quantum electrodynamics are the fine-structure constant and the masses of the electrically charged particles. Similarly, the fundamental parameters of quantum chromodynamics are the strong coupling and the masses of the coloured particles (the quarks).

QCD is the foundation of contemporary nuclear physics. The proton, a bound state of two up quarks and one down quark, forms the simplest nucleus. One up quark and two down quarks bind together to form the neutron. The binding force is enormously powerful. In contrast, the force that is conventionally referred to as the strong nuclear force is a byproduct of the basic chromodynamic force. It is comparable to the van der Waals forces between molecules, which are electromagnetic forces between neutral objects whose internal structure gives rise to a distribution of electric charge. Some fundamental problems therefore include studying the excitation spectrum of nucleons (and their cousins with strangeness and other flavors) and understanding nucleon structure directly from QCD. At strong coupling it is important to go beyond perturbation theory. This is challenging in a relativistic quantum field theory like QCD. A significant barrier is the fact that every point in a quantum field has one (or perhaps several) degrees of freedom, so that there are an infinite number of degrees of freedom. Additionally, if naively added together, the short-range fluctuations contribute an ultraviolet divergence to every quantity of interest.

Renormalization is the collection of concepts that addresses these fluctuations. In other words, a tool must be found that correctly handles both strongly coupled fields and the renormalization of short-range fluctuations. Since quarks are fermions, they must obey the Pauli exclusion principle. In the functional-integral formalism this is addressed by introducing fermion-specific anticommuting Grassmann variables. The integration is then a formal procedure known as Berezin integration rather than Riemann (or Lebesgue) integration. To cut a long story short, in lattice QCD we always have the option of performing the Berezin integration by hand. Although the lattice is not physical, it is a useful tool in mathematics and computation.

To obtain a genuine result for the chromodynamics of continuous spacetime, the integrals must be calculated for a series of lattices. Most lattice-QCD computations involve two phases. First, an ensemble of gluon fields with a given lattice spacing and quark mass must be generated. This requires the extremely computationally intensive fermion determinant, det M. Several hundred, or even a few thousand, samples of the gluon field are needed to create an ensemble of useful size. For this phase, the best available computers are required.
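The ensemble-generation phase is essentially importance-sampling Monte Carlo. As a rough, greatly simplified illustration (a toy two-dimensional scalar field with a Metropolis update, not actual QCD, and omitting the fermion determinant det M entirely), the sketch below shows the structure of such an ensemble generator:

```python
import numpy as np

rng = np.random.default_rng(0)

L = 16                     # lattice extent of the toy 2D scalar field
m2, lam = 0.5, 0.1         # mass-squared and quartic coupling of the toy action
phi = np.zeros((L, L))     # start from the "cold" configuration

def local_action(phi, x, y, value):
    """Euclidean action terms that depend on site (x, y) taking the given value."""
    neighbours = (phi[(x + 1) % L, y] + phi[(x - 1) % L, y] +
                  phi[x, (y + 1) % L] + phi[x, (y - 1) % L])
    kinetic = 2 * value ** 2 - value * neighbours        # discretized gradient terms
    potential = 0.5 * m2 * value ** 2 + lam * value ** 4
    return kinetic + potential

def metropolis_sweep(phi, step=0.5):
    """One Metropolis sweep: propose a local change at every site, accept or reject."""
    for x in range(L):
        for y in range(L):
            old = phi[x, y]
            new = old + rng.uniform(-step, step)
            dS = local_action(phi, x, y, new) - local_action(phi, x, y, old)
            if dS < 0 or rng.random() < np.exp(-dS):
                phi[x, y] = new
    return phi

# Generate a small ensemble of field configurations separated by several sweeps.
ensemble = []
for sweep in range(200):
    phi = metropolis_sweep(phi)
    if sweep % 10 == 0:
        ensemble.append(phi.copy())

print(len(ensemble), "configurations;",
      "<phi^2> =", np.mean([np.mean(c ** 2) for c in ensemble]))
```

Real lattice-QCD codes sample SU(3) gauge links with far more sophisticated algorithms and must include det M, which is what makes this phase so computationally expensive.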

CONCLUSION

Due to its experimental advantages over discrete variables (qubits), quantum computation based on a continuous-variable architecture (qumodes) has attracted a lot of attention recently. However, compared to discrete-variable algorithms, quantum algorithms based on continuous variables are far less developed. By extending the quantum algorithm for self-interacting chargeless scalars of Ref, we developed a continuous-variable quantum method for the computation of scattering amplitudes of massive charged scalars coupled to photons. Calculating scattering amplitudes is one of the most difficult problems in QFT, particularly when many strongly interacting particles are involved. As in the discrete-variable case, the quantum algorithm offers an exponential speedup over known classical algorithms based on lattice gauge theory. Such quantum algorithms will allow us to understand particle interactions beyond perturbation theory. Compared to chargeless scalars, working with quantum electrodynamics was more challenging because of the gauge invariance it requires. We demonstrated how one might guarantee gauge invariance by imposing it on a non-interacting system and then adiabatically turning on the electric coupling constant. In addition, our gauge choice produced an interaction Hamiltonian that depended only on the fields and not on their conjugate momenta; as in the case of scalar field theory, it could therefore be implemented using higher-order (non-Gaussian) phase gates. Scalar self-interaction terms could have been introduced easily, but we chose not to do so.

In order to comprehend the dynamics of electrons and other particles, it would be interesting to extend our method to fermionic fields. In order to understand (weak and strong) nuclear forces, it is also essential to build a comparable approach for non-Abelian gauge quantum field theories. Work is now being done in this direction.

REFERENCES

[1] M. Reck, A. Zeilinger, H. J. Bernstein, and P. Bertani, Experimental Realization of any Discrete Unitary Operator, Phys. Rev. Lett. 73, 58 (1994).

[2] A. Yu Kitaev, A. H. Shen, and M. N. Vyalyi, Classical and Quantum Computation (American Mathematical Society, Providence, 2002).

[3] J. E. Avron and A. Elgart, Adiabatic theorem without a gap condition: Two-level system coupled to quantized radiation field, Phys. Rev. A 58, 4300 (1998); G. K. Brennen, P. Rohde, B. C. Sanders, and S. Singh, Multiscale quantum simulation of quantum field theory using wavelets, Phys. Rev. A 92, 032315 (2015).

[4] C. K. Hong and L. Mandel, Experimental Realization of a Localized One-Photon State, Phys. Rev. Lett. 56, 58 (1986).

[5] A. Migdall, S. V. Polyakov, J. Fan, and J. C. Bienfang, Single-photon generation and detection physics and applications, Exp. Methods Phys. Sci. 45, 1 (2013).

[6] K. Marshall, R. Pooser, G. Siopsis, and C. Weedbrook, Repeat-until- success cubic phase gate for universal continuous-variable quantum computation, Phys. Rev. A 91, 032321 (2015).

[7] H.-K. Lau, R. Pooser, G. Siopsis, and C. Weedbrook, Quantum Machine Learning over Infinite Dimensions, Phys. Rev. Lett. 118, 080501 (2017).

[8] S. Yokoyama, R. Ukai, S. C. Armstrong, J.-i. Yoshikawa, P. van Loock, and A. Furusawa, Demonstration of a fully tunable entangling gate for continuous-variable one-way quantum computation, Phys. Rev. A 92, 032304 (2015).

[9] H.-K. Lau and C. Weedbrook, Quantum secret sharing with continuous- variable cluster states, Phys. Rev. A 88, 042313 (2013).

[10] N. C. Menicucci, P. van Loock, M. Gu, C. Weedbrook, T. C. Ralph, and M. A. Nielsen, Universal Quantum Computation with Continuous-Variable Cluster States, Phys. Rev. Lett. 97, 110501 (2006).

[11] S. L. Braunstein and H. J. Kimble, A posteriori teleportation, Nature (London) 394, 840 (1998).

[12] F. Grosshans, G. V. Assche, J. Wenger, R. Brouri, N. J. Cerf, and P. Grangier, Quantum key distribution using Gaussian modulated coherent states, Nature (London) 421, 238 (2003).

[13] S. L. Braunstein and P. van Loock, Quantum information with continuous variables, Rev. Mod. Phys. 77, 513 (2005).

[14] Y. Okuyama and N. Tokuda, Phys. Rev. B 40, 9744 (1989).

[15] R. J. Luyken et al., Physica E (Amsterdam) 2, 704 (1998).

[16] L. Degiorgi, G. Briceno, M. S. Fuhrer, A. Zettl, and P. Wachter, Nature (London) 369, 541 (1994).

[17] T. Fujisawa et al., Science 282, 932 (1998).

[18] A. G. Huibers et al., Phys. Rev. Lett. 81, 200 (1998).

[19] T. Sleator and H. Weinfurter, Phys. Rev. Lett. 74, 4087 (1995).

[20] C. L. Cates et al., in Proceedings of the Ninth International Symposium on Space Terahertz Technology, edited by W. R. McGrath (Jet Propulsion Laboratory, Pasadena, 1998), p. 597.

[21] A. Steane, Rep. Prog. Phys. 61, 117 (1998).

[22] C. Bernard et al. (MILC Collaboration), Phys. Rev. D 77, 014503 (2008).

[23] A. M. Steane, Phys. Rev. A 54, 4741 (1996).

[24] P. Buettiker, S. Descotes-Genon, and B. Moussallam, Eur. Phys. J. (2004).

[25] W. M. Yao et al. (Particle Data Group), J. Phys. G 33, 11232 (2006).