Tuesday, 26 October 2010

On the background of matter.

Kea (Marni Sheppeard) has a fantastic blog. She says she will solve quantum gravity, and her approach is totally different from all others. She uses the Koide matrix, further developed by Carl Brannen; Kea and Carl are both working on the Koide interpretation. This text is an attempt to understand the physics behind it. I am no physicist, so this is by necessity superficial, but I think this interpretation is extremely important for understanding the processes behind quantum biology, such as superconductivity, fractional quantum Hall effects, quantum tunneling, information encoding etc.
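For readers who have not seen it, the Koide relation that the matrix work builds on is easy to check numerically. Here is a minimal sketch in Python; the charged-lepton masses are rounded PDG-style values I have inserted myself, not numbers taken from Kea's or Carl's papers.

    from math import sqrt

    # Rounded charged-lepton masses in MeV (my assumption; PDG-style values)
    m_e, m_mu, m_tau = 0.511, 105.658, 1776.8

    # Koide's ratio Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))^2
    Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau)) ** 2

    print(Q)             # prints roughly 0.6667, strikingly close to 2/3
    print(abs(Q - 2/3))  # the deviation is at the 1e-5 level with these inputs

The striking point, of course, is that nothing in the Standard Model forces this ratio to come out as 2/3.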

Microscopic background?
Is the geometry of space a macroscopic manifestation of an underlying microscopic statistical structure? Is geometrodynamics - the theory of gravity - derivable from general principles for processing information? So asks Ariel Caticha (2005) in “The Information Geometry of Space and Time”. He builds a model of geometrodynamics based on the statistical concepts of entropy, information metric, and entropic dynamics. The model shows remarkable similarities with the 3+1 formulation of general relativity, and there is, in addition, a gauge symmetry under scale transformations. Can the general theory of relativity (GR) be derived from an underlying “statistical geometrodynamics” in much the same way that thermodynamics can be explained from an underlying statistical mechanics?

The geometry of space is the geometry of all the distances between test particles, and this geometry is of statistical origin. We deal with probability distributions and not with “structureless” points. (The statistical geometrodynamics developed here is a model for empty vacuum; it does not include matter.) The information geometry introduced does not define the full Riemannian geometry of space but only its conformal geometry. This appears at first to be a threat to the whole program, but it turns out to be just what we need in a theory of gravity. GR can be obtained in some appropriate limit, but with scale invariance. The scale factor σ(x) needed to assign a Riemannian geometry to space is arbitrarily chosen. The essence of the dynamics of GR lies in the embeddability of space in spacetime: σ(x) can be chosen so that the evolving geometry of space sweeps out a four-dimensional spacetime.
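For orientation, the "information metric" that information geometry is built on is, as far as I understand Caticha, the standard Fisher-Rao metric on a family of probability distributions p(x|θ); the formula below is my own addition for clarity, not a quotation from the paper.

\[
g_{ij}(\theta) = \int dx \, p(x|\theta)\, \frac{\partial \ln p(x|\theta)}{\partial \theta^{i}}\, \frac{\partial \ln p(x|\theta)}{\partial \theta^{j}}
\]

The "distance" between two nearby points then measures how distinguishable the probability distributions attached to them are; the overall scale of such a metric is not fixed, which matches the statement above that only the conformal geometry is determined.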

Perhaps the most interesting aspects are the revision it requires of the notion of distance, the statistical structure of both time and space, and the recognition that spacetime is not a fundamental notion. The statistical nature of geometry could provide mechanisms that would eliminate the infinities pervading quantum field theories, either through decoherence or through the finite number of distinguishable points within a finite volume. Furthermore, it would give the Lorentz and CPT symmetries only statistical validity, and it might bear on the subject of CP violation and matter-antimatter asymmetry. On the other hand, the scale invariance might be relevant to cosmological issues such as the early inflation and the late accelerated expansion of the universe.

This is what I shall discuss. The notion is very much like TGD. Notice that there is a need for two sheets: space and spacetime, matter and antimatter/dark matter, statistics and entropy.


The background space, a non-linearly interacting scalar field such as the Higgs field or the Skyrme field, is usually assumed to be non-local. There is a field of energy sometimes called the vacuum energy, zero energy ontology, the Higgs field etc. These field potentials (a homogeneous and isotropic 'vacuum') are suggested to explain the expanding space, measured for instance through gravitational lensing attributed to dark matter (DM). This DM makes a dimming effect when we look back in spacetime. But DM is not understood. Usually it is claimed to consist of WIMPs (weakly interacting massive particles). The result is a cosmological constant interpreted as DE, about 75% of all energy. A very big unexplained 'constant', indeed. It is usually linked to gravity. "Dark energy is just possibly the most important problem in all of physics", says Michael Turner in “Dark Energy and the New Cosmology”.

Dark energy has the following defining properties:
- it emits no light;
- it has large, negative pressure, p_X approximately equal to -ρ_X; and
- it is approximately homogeneous (more precisely, does not cluster significantly with matter on scales at least as large as clusters of galaxies).
Because its pressure is comparable in magnitude to its energy density, it is more "energy-like" than "matter-like" (matter being characterized by p << ρ). Dark energy is qualitatively very different from dark matter. Sufficiently large negative pressure leads to repulsive gravity (the second Friedmann equation).
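To make the last remark concrete: the second Friedmann (acceleration) equation, which is standard cosmology and not specific to any of the models discussed here, reads

\[
\frac{\ddot a}{a} = -\frac{4\pi G}{3}\,(\rho + 3p),
\]

so any component with p < -ρ/3 pushes the right-hand side towards positive values, i.e. accelerated expansion; dark energy with p_X ≈ -ρ_X does this comfortably.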


Frank Wilczek talks of the need for a ‘grid’ to explain the vacuum.

It is about interpretation, says Kea. Look at the work of Starkman et al on galactic plane subtractions from the WMAP data, which suggests zero angular correlation beyond an angle of roughly 60 degrees, in agreement with Riofrio's cosmology.

D.L. Bennett 1996, thesis: Multiple Point Criticality, Nonlocality and Finetuning in Fundamental Physics. Nature - in, for example, a field theory - seeks out values of action parameters that are located at the junction of a maximum number of phases in the phase diagram of a system that undergoes phase transitions. This is multiple point criticality, or non-locality. The latter often takes the form of a lattice, and the consistency of any field theory in the ultraviolet limit requires an ontological fundamental scale regulator. A gauge group is taken to be a Planck scale predecessor with no assumed symmetry.
The AGUT gauge group SMG3 has one SMG factor for each family of quarks and leptons. Ambiguities that arise under mappings of the gauge group SMG3 onto itself result in the Planck scale breakdown to the diagonal subgroup of SMG3. The diagonal subgroup is isomorphic to the usual standard model group. In the context of a Yang-Mills lattice gauge theory, the principle of multiple point criticality states that Nature seeks out the point in the phase diagram at which a maximum number of phases convene. This is the multiple point. The physical values of the three SMG gauge couplings at the Planck scale are predicted to be equal to the diagonal subgroup couplings corresponding to the multiple point action parameters of the Yang-Mills lattice gauge theory having as its gauge group the AGUT gauge group SMG3.
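Spelled out (my paraphrase of the group theory, ignoring possible discrete factors, not a quotation from the thesis): write SMG for the standard model gauge group; then

\[
SMG \;\simeq\; SU(3)\times SU(2)\times U(1), \qquad
SMG^{3} \;=\; SMG\times SMG\times SMG, \qquad
\{(g,g,g): g\in SMG\} \;\simeq\; SMG,
\]

and the last set, the diagonal subgroup, is what survives the Planck scale breakdown, which is why the low-energy group looks like the ordinary standard model group.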

Does this give a geometric trinity that can be interpreted in terms of categorical group theory?

Graham Dungworth: The tetrahedral form would provide the lowest energy state, more stable than a polyneutron. It's a 15-quark assemblage. Fifteen quarks as proton quarks would, for lepton conservation, require 5 electrons. This 15-quark assemblage conserves lepton number as the overall charge for the trimer is +1. Energy-wise there is a fourfold 4 × 0.511 MeV saving, which fortuitously is similar to that for H2 formation, ca. 2 eV below the H states.

Matti Pitkänen, TGD: The 'holy' trinity is one basement.

On the interaction and massivation.
Richard M. Weiner, “The mysteries of fermions”: It is conjectured that all known fermions are topological solitons. No bosonic leptons or baryons have yet been found, although predicted by SUSY. Another specific property of fermions is the Pauli exclusion principle, which states that, unlike in the case of bosons, a given quantum state cannot be occupied by more than one fermion. Maybe both of these have a common ground in physics, through the Skyrme mechanism, analogous to the Higgs mechanism?

Gerardus 't Hooft, Nobel lecture: The mass generation mechanism should not be seen as spontaneous symmetry breaking, since in these theories the vacuum doesn't break the gauge symmetry. 'Hidden symmetry' is a better phrase. Or the Higgs mechanism.

But Kea/Marni says there are no fairy fields, no whackalinos (WIMPs), no DM, no Higgs boson, and no constant speed of light. Her background is composed of algebra and logic. The braid pictures do not directly describe a physical space, but rather categorical objects; non-locality is very much a feature of the categorical picture, starting with entanglement in ordinary quantum mechanics. Action at a distance requires a deeper discussion. The idea of a changing c is supposed to be an observer dependent one ... us being the observers. So in that sense there is no absolute time and distance. An old fashioned space is best thought of as an infinite dimensional category, built from paths, and paths between paths, and paths between paths between paths, and so on. That is, classical geometry is complicated precisely because it is 'unnatural'.

Wow, what a statement! She says:

My opinion happens to be different. Since I do not believe that Quantum Gravity is based on classical symmetry principles, I have no reason to take the Higgs boson seriously. Moreover, I am well aware of its effective role in a local theory, but that theory simply does not apply to quantized masses ... just like Newton's laws break down if you push them too far. A cosmological constant is not an explanation of anything. It is an ad hoc constant of integration. The static universe was a mistake then, and the Dark Force is a mistake now. There is no inflation. Louise's infinite c condition explains the horizon problem perfectly well, and gives PREDICTIONS. Riofrio has unearthed much evidence that c has varied. Of course it is a conversion factor ... and hbar must also vary. Also the fine structure constant alpha? And you are beginning to see that this c variation is a statement about mass. Compton lengths have to be considered. In fact, there is a conformal boundary so that we can say M=t in Planck units, where t is a cosmic time parameter which one sets to zero in the Big Bang epoch. Alternatively, Riofrio's law is a form of Kepler's law, the general validity of which you probably do not doubt.
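The "M = t in Planck units" remark follows from Riofrio's law GM = tc^3 (quoted further down in this post); the little derivation below, and the Compton length, are my own spelling-out of the algebra, not Kea's words.

\[
GM = t\,c^{3} \;\Longrightarrow\; M = \frac{c^{3}}{G}\,t ,
\]

and in Planck units, where c = G = hbar = 1, this is simply M = t. The Compton length lambda_C = hbar/(mc) is why a varying c (or hbar) really is "a statement about mass": the length scale attached to a particle of mass m moves with them.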

It is a radically new view, different from every ‘New Cosmology’ model. Graham Dungworth tries to make the new approach compatible with the SM on the Galaxy Zoo forum.

Louise usually discusses her theory in the context of General Relativity, with which I suspect you are not arguing. In my opinion, once we start talking about a varying hbar (keeping alpha constant) we are outside General Relativity, as a mathematical formalism. However, even Einstein believed that 19th century Riemannian geometry was inadequate to capture the underlying principles of relativity. Here we have two basic options: (i) Louise's varying c rule or (ii) the favoured Dark Force within conventional geometry, fixing c arbitrarily. The claim is that (i) is far, far simpler and more natural.

Matti Pitkänen, TGD, uses 2-surfaces and Fermi surface analogs (the ‘Big Book’) for the job. Leakage between surfaces does the braidings. “p-Adic length scale hypothesis and CP breaking provide a simple explanation once again. There is already old evidence that neutrinos can appear in several p-adic length scales. Suppose that the p-adic mass squared scales of neutrinos and antineutrinos differ by 2 (p ≈ 2^k by the p-adic length scale hypothesis). This means k(nubar) = k(nu)+1.” (A reading of this scaling is sketched after this paragraph.) The argument that hbar and c are absolute constants is correct only in the GRT framework and in standard quantum mechanics.
The Fermi analog gives a coupling to condensed matter physics. Matti also has his own ‘Higgs mechanism’ for the massivation. The concept of mass naturally arises in p-adic models as an inverse transition probability with a dimensional constant of proportionality.
The inflationary scenario is a possible explanation for flat spacetime but is plagued by extraordinarily clumsy Higgs potentials and loss of predictivity. The TGD explanation is based on quantum criticality. At quantum criticality there are no scales, and this translates to the vanishing of the 3-D curvature tensor and flatness of 3-space during the phase transition which corresponds to an increase of Planck constant scaling up quantum scales. Cosmological expansion would indeed occur as rapid quantum phase transitions in the TGD Universe rather than as smooth continuous expansion. Bubbles within bubbles. TGD cosmology is fractal, containing cosmologies within cosmologies on all scales, and the scaled version of the same universal critical cosmology also allows one to model what happens at RHIC. It would be important to test this cosmology since it is universal, unlike the inflationary scenarios.
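My reading of the quoted p-adic statement, offered with caution: in Pitkänen's p-adic mass calculations the mass-squared scale of a particle goes roughly as 1/p, and the length scale hypothesis picks p close to a power of two, so

\[
m^{2} \propto \frac{1}{p}, \qquad p \simeq 2^{k}
\;\;\Longrightarrow\;\;
\frac{m^{2}(k+1)}{m^{2}(k)} \simeq \frac{1}{2},
\]

which is how k(nubar) = k(nu) + 1 translates into the factor-of-2 difference between the neutrino and antineutrino mass-squared scales.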

The negative pressure expansion is not due to negative kinetic energy as in the quintessence scenario, nor due to the addition of a cosmological constant term to Einstein's equations, but simply due to the constraint force taking care that the space-time surface is a surface in M^4 x CP_2. One cannot of course exclude phenomenological modeling in terms of a cosmological constant.

As I explained, the measured c, which I call c#, has c only as an upper bound in sub-manifold gravitation. The argument is childishly simple and explains the apparent variation of the Earth-Moon distance, which is something GRT people should take with utmost seriousness.

In the case of the Planck constant, TGD predicts that the Planck constant for a single sheet of the covering is an n-fold multiple of the ordinary one. If one treats the whole bundle of sheets as a single object it is the ordinary one, but then one has to find a way to treat the system non-perturbatively. This is by no means prevented by the conversion factor identification. Also the more radical option, in which the hierarchy of Planck constants does not follow from basic TGD, is completely consistent with the conversion factor interpretation.


Speed of light.
Louise Riofrio, Oct 2010, “The c Change Continues”: Dr. Daniel Gezari's interests include tests of Lorentz invariance and searching for evidence of a preferred reference frame for light (“Lunar Laser Ranging Test of the Invariance of c”). He reports the speed of light c as 8 m/sec less than its formerly accepted value. Perhaps c has slowed since the canonical measurement? Even more surprising, Gezari claims that c varies by as much as 200 m/sec depending on the observer's direction. This would undermine Lorentz invariance, which says that c is the same regardless of direction. That 'the speed of light seems to depend on the motion of the observer' (blue- and redshift) undermines the basis of Special Relativity.

Matti: It does not make sense in the GRT framework. “In general relativity one can only say that light moves along light-like geodesics. Riofrio's formula also leads to nonsensical results since it leads to time variation of elementary particle masses. In the TGD framework the sub-manifold geometry allows one to speak about varying c; call this velocity c#. The travel times are in general different, since the path along a light-like geodesic of the space-time surface is in general longer than along the light-like geodesics of the imbedding space, and depends on how wiggly the space-time sheet is. For Robertson-Walker cosmology the value of c# equals c# = sqrt(g_aa), and it increases since space-time approaches flatness (cosmological t=0?).” (NB: the negative pressure was calculated from Friedmann's cosmology.)
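A rough way to see where c# = sqrt(g_aa) comes from - this is my own gloss on Pitkänen's statement, so treat it only as a sketch: write the induced Robertson-Walker metric of the space-time surface with the light-cone proper time a as time coordinate,

\[
ds^{2} = g_{aa}\,da^{2} - a^{2}\,d\sigma^{2}, \qquad
ds^{2}=0 \;\Longrightarrow\; \frac{a\,d\sigma}{da} = \sqrt{g_{aa}} \;\equiv\; \frac{c_{\#}}{c} .
\]

In the flat imbedding-space metric the same ratio is 1, so c# ≤ c, and c# creeps up towards c as the induced metric flattens - which is the behaviour described in the quote.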

Kea: I don't think Louise's picture is so different to yours really. Remember that we are always mixing up several notions of time here. Your picture of increasing c could be viewed as a forward time evolution in Louise's picture. Since I take stringy dualities seriously, this actually makes sense.

Witten: For massive particles a vector current could create either vector or scalar particles; for massless particles, though, Lorentz invariance forbids creation of vector particles, only scalar particles are allowed.

Graham D: Wherever we might be in the universe we are confronted by an absolute frame of reference from the neutrino temperatures; of mass energy continually created and destroyed, which currently is blueshifted towards us, wherever we are, from incoming neutrinos at -1315 km/second contra to the Hubble flow, where the size factor for compression = 1 + z, which we use to determine matter/DM outflow. This is reminiscent of part of the steady state model that attempted to create new particle mass, e.g. the H atom, from space to balance the eventual disappearance of the outflow of galaxies to beyond the horizon. Today, the alternative explanation for big Delta is an accelerated expansion, from supernova standard candles, to hyperbolic geometry. Irrespective of the absolute neutrino masses, this low cumulative mass, even should it change, will question Lorentz invariance. Yet if the cycle of creation and eventual destruction and rebirth is complete, all will find comfort.

Lunar Laser-Ranging Detection of Light-Speed Anisotropy and Gravitational Waves, Reginald Cahill: LLRE shows a variation in c. He also claims a correlation with spacecraft flyby anomalies. LLRE has reported the Moon's semimajor axis increasing by 3.82 cm/yr, anomalously high. If the Moon were gaining angular momentum at this rate, it would have coincided with Earth about 1.5 billion years ago. Apollo lunar samples (which this scientist has had the honour to touch) show that the Moon is nearly as old as the Solar System, over 4.5 billion years. Measurements using tidal sediments and eclipse records show that the Moon is receding at about 2.9 cm/yr. LLRE disagrees with independent measurements by over 0.9 cm/yr, a huge anomaly.

If the speed of light is slowing according to GM=tc^3, the time for light to return will increase, making the Moon appear to recede faster as measured by LLRE. The predicted change is 0.935 cm/yr, precisely accounting for the 10 sigma anomaly otherwise ascribed to an acceleration caused by "dark energy." The explanation for DE and the apparent acceleration of redshifts is something a child could understand. Since redshifts are related to the speed of light, it is not the Universe accelerating but the speed of light slowing down.
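The 0.935 cm/yr figure can be reproduced as a back-of-envelope estimate: if GM = tc^3 with G and M fixed, then c goes as t^(-1/3) and slows fractionally by 1/(3t) per year, so a ranging experiment that assumes a fixed c sees an apparent extra recession of roughly d/(3t) per year. The quick check below is my own arithmetic; the Earth-Moon distance and cosmic age are the usual rounded values, not numbers from Riofrio's post.

    # Apparent extra lunar recession if c ~ t**(-1/3), i.e. GM = t*c^3 with G, M fixed
    d_moon = 3.844e8      # mean Earth-Moon distance in metres (assumed round value)
    t_cosmic = 13.7e9     # age of the universe in years (assumed round value)

    # fractional slowing of c per year is 1/(3*t), so the light travel time,
    # and hence the inferred distance, grows by d/(3*t) per year
    apparent_cm_per_yr = 100 * d_moon / (3 * t_cosmic)
    print(apparent_cm_per_yr)   # ~0.94 cm/yr, close to the quoted 0.935 cm/yr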

Ed Witten 2010 “A new look at the path integral of quantum mechanics”, abstract:
The Feynman path integral of ordinary quantum mechanics is complexified and it is shown that possible integration cycles for this complexified integral are associated with branes in a two-dimensional A-model. This provides a fairly direct explanation of the relationship of the A-model to quantum mechanics; such a relationship has been explored from several points of view in the last few years. These phenomena have an analog for Chern-Simons gauge theory in three dimensions: integration cycles in the path integral of this theory can be derived from N=4 super Yang-Mills theory in four dimensions. Hence, under certain conditions, a Chern-Simons path integral in three dimensions is equivalent to an N=4 path integral in four dimensions.

And in The Problem Of Gauge Theory 2008 he says: Based on real experiments and computer simulations, quantum gauge theory in four dimensions is believed to have a mass gap. This is one of the most fundamental facts that makes the Universe the way it is. The mass gap means simply that the mass m of any state (orthogonal to the vacuum) is bounded strictly above zero. There is no mass gap in electromagnetism; the photon is massless, so electromagnetic waves can have any positive frequency. That is why we can experience light waves in everyday life.
By contrast, the mass gap in strong interactions means that the minimum frequency needed to probe the world of SU(3) gauge theory (which describes the strong interactions) is mc^2/hbar, where m is the smallest mass.
Taking from experiment the value of the smallest mass, this frequency is of order 10^24 sec^-1, which is high enough (with room to spare) that this world is way outside of our ordinary experience. While it is very large compared to our ordinary experience, the mass gap is in one sense very small: it is zero in the asymptotic expansion. As a result, we do not have a really good way to calculate it, though we know it is there from real experiments and computer simulations.
So in short, this mass gap is one of the most basic things that makes the Universe the way it is, with electromagnetism obvious in everyday life and other forces only accessible to study with modern technology. The mass gap is the reason, if you will, that we do not see classical nonlinear Yang-Mills waves. They are a good approximation only under inaccessible conditions.
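Witten's 10^24 per second is easy to verify as an order of magnitude: the frequency is just the gap energy divided by hbar. Which state supplies "the smallest mass" is not spelled out in the text, so the ~1 GeV hadronic scale below is my assumption.

    # Frequency scale of the strong-interaction mass gap: omega ~ m*c^2 / hbar
    hbar = 1.0545718e-34        # J*s
    eV = 1.602176634e-19        # J per electronvolt
    E_gap = 1.0e9 * eV          # assume a ~1 GeV hadronic mass scale (my assumption)

    omega = E_gap / hbar
    print(omega)                # ~1.5e24 per second, i.e. of order 10^24 as Witten says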

DM - fuzz?
Kea does not believe in the Dark Force, and by that she means the DE approach in the SM. But that doesn't rule out the DM problem, because the DM problem also involves antimatter scenarios and anyonic Yang-Mills mass gap phases. The wavefunction does indeed change as you move from point to point by the phase angle, not an observable but crucial to Yang and Mills.

Graham D: Unlike the Abelian U(1), SU(2) is difficult in that there are the rotations in both complex planes of an inner space along with those of weak isospin space - non-Abelian and non-commutative. For every particle there is an antiparticle, and these have opposite attributes or truths. The attributes or truths of the electron are mass charge, electric U(1) charge and weak isospin charge, handled by the gauge symmetry SU(2). There's no colour charge; that's covered by the SU(3) symmetries. The initial Minos results appear to contradict the standard model and these truths. Dark matter is not just mirror matter. The standard model is recovered by those four neutrino masses, but they are two pairs, not a threesome and a oneness. Nor do the mirror or former mirror properties, before the Minos data, preserve parity. Those neutrinos aren't born equal; some are more massive and less numerous than others. Additionally, they cannot exist in isolation; they are in equilibrium with photon energies at all temperatures.

There is no Higgs but DM. The conventional interpretation is that the inertial force or residual particle mass (energy) is given by the Higgs fields and scalar potentials (2 of them). There are actually 4 fields, since two of these are complex. Of the 4 fields there is one that gives rise to the so-called Higgs boson, the lowest energy potential. It's a top-down theory where the empirically discovered mass of the top quark was made possible by the theoretical treatment of the Higgs field components at energies of ca. 100 GeV (cf. the proton mass, ca. 1 GeV, which is 95% approximated by the gluon fields).

One ‘allo’ neutrino (Graham's definition of generation 1), only one of which is associated with postulated dark matter. What happens to the other two? They belong to the other phase that is the vacuum, tied into it. Ordinary matter and dark matter consort together, of opposite parity, mirror forms, but their mass charges are different and consequently their abundances in the vacuum vapour are different. Because their abundances are different, ca. 8-fold more DM than OM, the vacuum as a consequence is enriched in left-handed chirality, because the DM removes right-handed species. Hence you can't treat mirror chirality solely with real particles in Nature without addressing the pulling or pushing power of the vacuum.

Kea and Graham talk of 'false vacuum'.

Unless all charges are matched, particle-antiparticle pairs don't annihilate. A u quark annihilates with a u-bar antiquark but not with, say, a d-bar antiquark, since they carry different mass charges as well as all other conjugate charges. The standard model doesn't predict everything, of course: for example the number of generations, why there are these pairings between leptons and hadrons (you need two measured masses to predict the others), or the magnitude of the charges. Neutrinos don't carry electric charge or colour charge. There's no pattern there, nor an explanation of why they all carry the gravitational charge. This latter force or interaction, mediated by a graviton boson (spin 2!), although ignored in the standard model, is a truth all particles carry.

This gives a very interesting situation. Einstein, Lorentz invariance, Pauli exclusion and Heisenberg's uncertainty are reinterpreted, maybe also Feynman? No wonder theoretical physics has made no progress lately. Is this probable? No, say most. But something is seriously wrong; that almost everyone agrees upon. So let's look further at this issue.

References.
D.L. Bennett (1996), thesis: Multiple Point Criticality, Nonlocality and Finetuning in Fundamental Physics. http://www.arxiv.org/abs/hep-ph/9607341v1 (the anti-GUT approach).
Carl Brannen, Spin Path Integrals and Generations, 2010. http://arxiv.org/abs/1006.3114v1 and Found. Phys. (2010), DOI: 10.1007/s10701-010-9465-8.
C. A. Brannen (2008), http://www.brannenworks.com/koidehadrons.pdf .
Reginald Cahill, Lunar Laser-Ranging Detection of Light-Speed Anisotropy and Gravitational Waves http://arxiv.org/pdf/1001.2358
Ariel Caticha, The Information Geometry of Space and Time. http://arxiv.org/abs/gr-qc/0508108v2
Jerry Decker, comment on Riofrio's blog.
Graham Dungworth, http://www.galaxyzooforum.org/index.php?topic=277933.0 (Dark matter thread + Minos neutrinos thread).
Daniel Gezari Lunar Laser Ranging Test of the Invariance of c http://arxiv.org/abs/0912.3934
Gerardus ’t Hooft “Determinism and Dissipation in Quantum Gravity”, 2000. hep-th/0003005
Yoshio Koide. A fermion-boson composite model of quarks and leptons. Phys Lett. B, 120:161–165, 1983 [abstract]. http://linkinghub.elsevier.com/retrieve/pii/0370269383906445
Matti Pitkänen, p-Adic Mass Calculations: New Physics; Higgs and massivation in TGD framework (the phases are divided according to primes in TGD), Oct. 2010, http://matpitka.blogspot.com/ and comments on Kea's blog.
http://tgd.wippiespace.com/public_html/genememe/genememe.html
Louise Riofrio, The c Change Continues, Oct 17, 2010, http://riofriospacetime.blogspot.com
Marni D. Sheppeard, Neutral particle gravity with nonassociative braids, viXra:1010.0029, Oct. 2010.
M. D. Sheppeard, http://pseudomonad.blogspot.com/
Minos, http://physicsworld.com/cws/article/news/42957
SPACE.com -- Dark Energy and Dark Matter Might Not Exist, Scientists Allege
Starkman et al on extra galactic plane subtractions from the WMAP data
Michael Turner, “Dark Energy and the New Cosmology”. http://arxiv.org/abs/astro-ph/0108103
Erik Verlinde, http://en.wikipedia.org/wiki/Erik_Verlinde , http://en.wikipedia.org/wiki/Gravity_as_an_entropic_force
Richard M. Weiner “The mysteries of fermions”. http://arxiv.org/abs/0901.3816
Frank Wilczek, "The Origin of Mass", http://frankwilczek.com/Wilczek_Easy_Pieces/342_Origin_of_Mass.pdf
Ed Witten, 2010, “A new look at the path integral of quantum mechanics”, http://arxiv.org/abs/1009.6032
E. Witten, Baryons in the 1/N Expansion. Nucl. Phys. B160, 57 (1979). On the Skyrme model.
Sidney Coleman, Edward Witten, Chiral Symmetry Breakdown in Large-N Chromodynamics. Phys. Rev. Lett. 1980. http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-2493.pdf

7 comments:

  1. http://arxiv.org/abs/1007.1750 Wun-Yi Shu 11 Jul 2010.
    Cosmological Models with No Big Bang

    In the late 1990s, observations of Type Ia supernovae led to the astounding discovery that the universe is expanding at an accelerating rate. The explanation of this anomalous acceleration has been one of the great problems in physics since that discovery. In this article we propose cosmological models that can explain the cosmic acceleration without introducing a cosmological constant into the standard Einstein field equation, negating the necessity for the existence of dark energy. There are four distinguishing features of these models: 1) the speed of light and the gravitational "constant" are not constant, but vary with the evolution of the universe, 2) time has no beginning and no end, 3) the spatial section of the universe is a 3-sphere, and 4) the universe experiences phases of both acceleration and deceleration. One of these models is selected and tested against current cosmological observations of Type Ia supernovae, and is found to fit the redshift-luminosity distance data quite well.

    Comment at http://www.galaxyzooforum.org/index.php?topic=278072.msg483063#msg483063

    They postulate that both c and G are variable over the evolution of the Universe.

  2. cont. with comment
    It's not that G and c vary but that G/c^2 is constant at all t. Now we do know that alpha and c haven't varied by more than several ppm during the last several billion years. However, the 7/8ths of the 13.8 billion yrs of universal history puts the first 1/7th of time beyond empirical reach at present.

    It's dated too in addressing a dust and/or radiation scenario. Without doubt the early history is very hot and is better modelled as a radiative phase where the kinetic energies of particles vastly outweigh their particle rest mass energies. It's currently topical in that who wouldn't like to get rid of, i.e. zero out, dark energy or vacuum energy for that matter, and the flatness problem to boot.

    What is relevant is the great increase in c velocity in the early phase of history. The greater c is, the less relativistic the universe becomes. Also, Shu gets wrapped up in energy conservation: knowing full well that space stretches, and hence photon wavelengths stretch, he keeps frequency constant and ends up having to admit that Planck's constant h must increase. As h increases the universe becomes more quantum-like. For some reason he doesn't address the obvious, a possible periodicity or cyclicity of the universe that arises naturally from his three conversions of space, mass and time.

    ±G/(±c)^2 equals a constant at all times for a cyclic universe.

  3. cont.
    There's no mention of what we have learnt from particle physics about matter and antimatter (the latter appears as matter moving backwards in time, but is antimatter with opposite charges moving forwards in time from interaction vertices, absorbing or emitting carrier particles; particles whose antiparticles are always of identical mass, or were so from conventional data prior to the Minos results).

    Since, cryptically, I know there are 6 different low mass lepton particles with non-identical mass energies that respond conventionally (conventionally there are three, with their antiparticles identical in mass) only via the electroweak force and gravity, with no electric charge, and since we know the interaction mechanisms vis-à-vis muon decay and neutrino scattering and oscillation/interference, we can calculate the attractive and repulsive forces for each of the 26 interactions. At extremely high temperature all are in equilibrium, absorbing and emitting via their carrier bosons, the W-, its anticharged W+ partner and the neutral current Z0 that carries some of the photon B0. The limit to G increase is theoretically +/-64 fold (62.4 and 63.4 fold from the current Minos mass energy difference; the 63.4 fold is a repulsive force). When the repulsive force was 64-fold G the c velocity is +/-8c (the minus equates with neutrinos apparently moving backwards in time but which are LH antineutrinos moving forward in time); the number of tau antineutrinos was then ca. > 3.3 × 10^9 fold greater than all other particles except its equiabundant generational partners. Sounds complicated, but it's straightforward really. By giving mass differences to the most abundant particles in the universe, which are known to exhibit chirality or parity, an all or nothing, left or right, even or odd, it means that when you apply energy conservation to the emitter and absorber vertices or hubs of the Feynman spacetime diagrams these vertices are given a jolt, a net impulse that is positive or negative, and not zero, which would be appropriate for conventional exactly equal mass charges for particles and antiparticles.

    I didn't appreciate that the significance of Shu's idea is that his G/c^2 would be constant at all times. I wouldn't care to say that Einstein was wrong re general relativity; it's a new way of interpretation. There are 6 new particles, Shu's dust in essence, but 12 parity particles in the false vacuum and 6 oppositely-parity particles in what we call ordinary matter (and DM? I don't know yet). The neutrino interactions are electroweak SU(2), and gravity and antigravity appear as a residual force of these in total 64 interactions at high temperature and energy. At low temperatures applicable to the present universe only 4 interactions are significant, between the electron lepton generations and their u and d quark partners.

  4. A lengthy discussion of the topic on
    http://www.science20.com/hammock_physicist/big_bang_big_bewilderment

  5. Ulla, I just found this blog lately also. I have to come back here to read your long post today.

    But I do find that Koide formula interesting - only one might consider coordinates other than (111) for the angles of the electrons... For the geometry we might expect (ttt) or (t00) where t = 1.6180339, the golden ratio, and this embedded in four-space. My comment to her on the 24-cell may have been suggested by others - but I doubt it was suggested before I saw it long ago. So many think this is just an analog of the octahedron - not very good 4-space thinking!

    We have to get past Feynman too, and enter into the hint of a wider and freer view of things as he seems to have pointed a way.

    The PeSla

    I think it is not only one 'coordinate'. It is an oscillation, as if the electron were a vibrating string with 'ends'. The vibrations can then be described by a 3x3x3 matrix?

    http://www.2physics.com/2010/08/watching-atoms-electrons-move-in-real.html

    The golden ratio is difficult, but highly interesting. It would be the missing link. Maybe it comes from the total Universe, from the BB, not from the small parts of the Universe?

    The twistor approach is one way to get past Feynman, but there are other ways too. It must be done in a more simple way. I have often thought of Lisi's E8 presentation, and his rotations, how they might be presented in Kea's braidings.

    The mass gap in the braidings is also very important.

    Kea has published a summary of her research, 24 pp. It is maybe better to read it directly from her.

    http://www.scribd.com/doc/40042690/Mass-Ribbon-Paper
