Temporal Curvature and Elementary Particles

Sam Micheal, Faraday Group, 21/NOV/2008

 

This theory is based on the assumption that space is an elastic medium which can be distorted under extreme force. We define a new quantity Y₀ ≡ ħ/(2lₚtₚ) ≈ 10⁴⁴ N, which we call the elasticity of space. Another new quantity is the linear strain of space, which we call the extension: X ≡ m/(lₚY₀μ₀ε₀) = 2tₚω. A related quantity is temporal curvature: C ≡ X/4π = tₚν. With these new definitions, it can be shown that all significant attributes of elementary particles are interrelated: energy in mass is energy in extension, which is the same energy in temporal curvature, which is spinning charge. The two qualities of space, elasticity and impedance, relate the significant attributes of elementary particles.

 

Time dilation aboard a speedy craft is an accepted fact. Time dilation near strong gravity sources is also an accepted fact. For the moment, let’s ignore spatial curvature near those sources and focus only on temporal curvature. Time slows down the most at the maximum of curvature, which could be the center of a planet, star, or neutron star. In a circular orbit, temporal curvature is constant. In a plunging orbit, temporal curvature goes from some fixed level to maximum and back to the fixed level (depending on starting position). Analysis in gravitation is about trajectories, or geometry. The two trajectories listed above are orthogonal in the sense that any trajectory can be made from a linear combination of the two. This is essentially a proof that gravitation can be analyzed exclusively in the context of temporal curvature.

 

In much the same way, the mass component of elementary particles can be treated as a manifestation of temporal curvature. Energy in mass can be viewed as energy in temporal curvature. This is especially convenient when we consider relativistic effects: relativistic energy is simply an enhancement of rest energy (in temporal curvature).

 

Elementary particles have three components of energy: two that are non-relativistic and one, mentioned above, which is a relativistic quantity. The non-relativistic components are spin and electric flux. (The facts that two of the three are non-relativistic quantities, their measured levels, and the existence of exactly ten stable elementary particles are not debated here. I believe a full understanding of temporal curvature and an appreciation of impedance will illuminate all these facts.)

 

Some years ago, I discovered a relationship between charge and spin that has been ignored and dismissed:

   ħ ≈ Z₀e²   where Z₀ is the impedance of space.

Spin is impeded charge (moment). There is a kind of equivalence between spin and charge (moment). If we ignore the numerical approximation, total energy can be expressed as:

   E_T = E₀/γ + E₀/2π + E₀/4π   where γ = √(1 − (v/c)²) (note: the reciprocal of the conventional Lorentz factor)

and where the first term is energy in temporal curvature, second – energy in electric flux, and third – energy in spin.

 

[Figure: Energy Distribution at Various Speeds – the three components shown at v = 0, 0.25c, 0.5c, 0.75c, and 0.99c]
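The figure’s split can be reproduced numerically. Below is a minimal Python sketch (mine, not the author’s) of E_T = E₀/γ + E₀/2π + E₀/4π under the author’s convention γ = √(1 − (v/c)²); the normalization E₀ = 1 and the function name are illustrative assumptions.

    import math

    def energy_components(v_over_c, E0=1.0):
        # Total energy split per the text: E_T = E0/gamma + E0/(2*pi) + E0/(4*pi),
        # with the author's convention gamma = sqrt(1 - (v/c)^2).
        gamma = math.sqrt(1.0 - v_over_c**2)
        curvature = E0 / gamma        # relativistic term: energy in temporal curvature
        flux = E0 / (2 * math.pi)     # non-relativistic: energy in electric flux
        spin = E0 / (4 * math.pi)     # non-relativistic: energy in spin
        return curvature, flux, spin

    for v in (0.0, 0.25, 0.5, 0.75, 0.99):
        cu, fl, sp = energy_components(v)
        total = cu + fl + sp
        print(f"v = {v:.2f}c: curvature {cu/total:.1%}, flux {fl/total:.1%}, spin {sp/total:.1%}")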

 

The next serious question is about the confinement mechanism – what keeps these “bubbles in space-time” from simply dissipating? What holds them together? I propose a balancing of forces: the extreme inelasticity of space set against an immense internal temporal pressure wave. The elasticity of space can be calculated with a couple of assumptions: Y₀ = ħ/(2lₚtₚ) ≈ 6.0526 × 10⁴³ N. If elementary particles are Planck-sized objects, they must have internal pressure that balances that extreme force. I propose a spherical standing wave of temporal curvature – much like an onion in structure. The rest energy of elementary particles is small, but pack that energy into a very small space and you have a good candidate for the confinement mechanism. Again, the issue here is not the why of ten elementary particles. I believe that why can be answered when we fully understand temporal curvature and appreciate the impedance of space.
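The quoted value is straightforward to check. A minimal Python sketch, assuming recent CODATA constants (slightly different constant vintages account for the digits beyond the fourth):

    # Elasticity of space, Y0 = hbar / (2 * l_P * t_P)
    hbar = 1.054571817e-34   # J*s, reduced Planck constant
    l_P  = 1.616255e-35      # m, Planck length
    t_P  = 5.391247e-44      # s, Planck time

    Y0 = hbar / (2 * l_P * t_P)
    print(f"Y0 = {Y0:.4e} N")   # ~6.05e43 N, i.e. ~10^44 N as quoted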

 

The only extended component of elementary particle energy is electric flux. The other components are confined to the Planck-sphere.* This could explain the double-slit phenomenon of self-interference. The electric flux of elementary particles is not unlike a soliton – a solitary standing wave of electric energy. It is not unreasonable to propose this as the mechanism of self-interference. This idea could be tested in simulation and verified with real particle beams of various configurations.

*Of course, there must be “residual” extensions of spin and gravitational energy – otherwise, spin and gravitational interactions (between elementary particles) would not be present. (As I understand it, spin is manifested via magnetic moment, which is a result of spinning charge. Gravitation must be an extension of temporal curvature beyond the Planck-sphere. The proportion of extended energy must depend on the number and amplitude of waves inside.)

An idea I discarded around twelve years ago was the following: temporal curvature acts as an energy reservoir for oscillating flux and spin. The idea was developed to account for tunneling behavior. Preliminary calculations were not encouraging (energy in electric flux must be increased to compensate for the “sinusoidal deficit” in order to maintain Bohr dimensions). Perhaps tunneling can be explained in another semi-classical way, or perhaps there is indeed some oscillation of electric flux and spin. Further work is required.

 

This theory has been developing for about twenty-five years – very slowly at first, for three reasons: difficulty in visualization, ironing out seeming inconsistencies, and my reluctance to employ Planck-size. Visualizing standing waves of temporal curvature is not easy. There were apparent inconsistencies in the relativistic domain at first, but these disappear with proper definitions (ν ≡ 1/(Tγ²)). Around twenty-five years ago, it was suggested to me to employ Planck-size, but the fact that the theory becomes unverifiable when you do that impelled me to pursue other avenues at first (Compton dimensions). The theory “took off” when I took a course in electromagnetism around fifteen years ago. That is when I discovered the relationship between spin and charge. And only very recently did I give up on Compton dimensions in preference for Planck-size. It took over twenty years to precisely define elasticity – in part because of my reluctance to employ Planck dimensions.

 

Once we arrive at a suitable model of elementary particles – one with an appropriate arrangement of spin and flux – creating nuclei, atoms, and molecules (as in simulations) will become child’s play.

 

The purpose of this perspective is to present a plausible and elegant picture of elementary particles – that they are stable vibrations in space-time. From this perspective, it can be shown that the origin of uncertainty is not a probability density function but the vibratory nature of elementary particles themselves. Energy-uncertainty can be shown to be bounded by a linear function of position-uncertainty alone. This contrasts with the conventional perspective, which asserts energy and time uncertainty are complementary and interdependent random variables – decreasing one increases the other and vice versa.

 

No theory is any good unless it is testable, so a decisive test is proposed to compare convention against this more elegant perspective. It is proposed that elementary particles are “mini dynamical systems” that are disturbable – and that those disturbances are measurable.

 

For a more thorough discussion and development of these ideas – please download a copy of my latest book: Gravitation and Elementary Particles.

 

Addendum 1:

“The Universe in Fourteen Lines” ;)

 

E ≡ Y₀lₚX ≡ Y₀ctₚ4πC ≡ mc² ≡ hν ≡ h/(Tγ²) ≡ hC/tₚ ≡ ħω ≈ Z₀e²ω

 

ΔEΔt ≥ ħ/2; ΔpΔx ≥ ħ/2; ΔXΔt ≥ tₚ; ΔE > −c₁Δx + c₂; ΔX > −c₃Δx + c₄

 

I was told years ago that “it’s useless to stare at equations for hours at a time,” but insights can be garnered by constructing lists of identities such as the above – “proving” things that perhaps were only suspected before. Reading the above in English: energy is (the force in) the elasticity of space through Planck-length causing an extension – which is – that same force through Planck-time causing temporal curvature – which is – mass times the speed of light squared – which is – Planck’s constant times frequency – which is – Planck’s constant divided by period – which is – Planck’s constant times temporal curvature divided by Planck-time – which is – the fundamental unit of angular momentum times angular frequency – which is approximately equal to the impedance of space times charge-moment times angular frequency. The c in line three is a scaling factor to keep units correct (c is the speed of light). The γ in line six is a relativistic scaling factor. E, X, C, m, ν, T, and ω are all relativistic quantities. Three fundamental identities were garnered in the process of constructing the above – insights that I suspected but could not easily prove:

 mass is energy stored in temporal curvature – Y₀(4πtₚ/c)C ≡ m,

 energy through time is energy in curvature – Etₚ ≡ hC,

 energy through time is spin causing extension – Etₚ ≡ (ħ/2)X,

 and there is a kind of equivalence between the elasticity of space and the impedance of space (a relation I’ve been looking for – a long time) – Y₀lₚX ≈ Z₀e²ω.
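These identities can be checked numerically. Below is a minimal Python sketch; the choice of the electron as the test mass and the CODATA constant values are my assumptions for illustration:

    import math

    hbar = 1.054571817e-34; h = 2 * math.pi * hbar
    c = 2.99792458e8
    l_P, t_P = 1.616255e-35, 5.391247e-44
    Z0 = 376.730313668           # ohm, impedance of free space
    e  = 1.602176634e-19         # C, elementary charge
    m  = 9.1093837015e-31        # kg, electron rest mass (illustrative choice)

    E  = m * c**2                # rest energy
    nu = E / h                   # frequency, from E = h*nu
    C  = t_P * nu                # temporal curvature, C = t_P * nu
    X  = 4 * math.pi * C         # extension, X = 4*pi*C
    Y0 = hbar / (2 * l_P * t_P)  # elasticity of space
    omega = 2 * math.pi * nu

    print(E, Y0 * l_P * X, h * C / t_P, hbar * omega)  # all agree, as the identities require
    print(hbar / (Z0 * e**2))                          # ~10.905: the gap in hbar ~ Z0*e^2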

 

Strictly speaking, force through time causes temporal curvature – which is mass. Energy through time is energy in curvature. Energy through time is also spin-moment causing spatial extension. The final relation deserves special explanation. It shows there is a correspondence between three sets of analogous quantities: elasticity is to length as impedance is to charge-moment; length is to extension as charge-moment is to angular frequency; elasticity is to extension as impedance is to angular frequency. Extended space is spinning charge. The relation shows how equally important elasticity and impedance are. ..Some years ago, I abandoned an oscillatory model of elementary particles – where energy in charge-spin oscillated with energy in spatial-temporal curvature – because I could not prove it (editors objected: mere speculation). So I attempted to cut my assumptions to a minimum – cutting away parts of the model that were not absolutely essential. The current model is plausible and feasible. The more I investigate it, the more it seems to make sense. We just need to work on modeling flux and spin (such as proposed by Bergman).

 

Let’s rewrite the above – keeping just the absolute essentials:

 m/(μ₀ε₀) ≡ E ≡ Y₀lₚX ≡ (h/tₚ)C ≡ ((ħ/2)/tₚ)X ≈ Z₀e²ω

where μ₀ is the permeability of space, ε₀ is the permittivity of space, and Z₀ ≡ √(μ₀/ε₀) ≈ 377 Ω.
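As a quick numerical check of that last definition (a Python one-liner, assuming standard CODATA values for μ₀ and ε₀):

    import math
    mu0  = 1.25663706212e-6        # H/m, permeability of space
    eps0 = 8.8541878128e-12        # F/m, permittivity of space
    print(math.sqrt(mu0 / eps0))   # ~376.73 ohm, the impedance of space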

 

 m ≡ (h/(tₚc²))C

 

 Y₀lₚX ≈ Z₀e²ω

 

Energy in mass;

is: elastic force through distance causing extension;

is: energy over time causing temporal curvature;

is: spin energy over time causing extension;

is: spinning charge.

Curved space-time is mass is spinning charge; it’s all the same energy – just different manifestations of it. Line two: mass is energy over time causing temporal curvature; mass is temporal curvature. Line three: there is a kind of equivalence between the elasticity and impedance of space.

 

Addendum 2:

A Note About Approximation

 

Many will dismiss this theory for the simple reason that I use an approximation above between spin and charge energy. ..After some contemplation, we could think of the difference (ratio) between charge and mass energy (.091701) as a lag in phase (phase difference) between them. If we represent energy in mass as cos²θ, the phase lag for charge energy is −1.26314. Since mass is a standing wave of temporal curvature, we cannot detect this phase lag directly – we can only calculate it. This seems better than summoning a cloud of virtual particles to explain the charge deficit. Of course, the why of the charge-energy phase lag still needs to be explained. ..Yet another way of looking at the charge deficit is with vectors (we assume a specific geometry with this perspective): two electric vectors of equal magnitude √Z₀·e lie in the x-y plane. Their cross product is a vector in the z-direction with magnitude Z₀e²sinθ, where θ is the angle between the electric vectors. Since sinθ = .091701, θ = .09183 rad = 5.26149° (the angle is not unique: π − .09183 also works). Again, if we adopt this approach, we need to explain why. Finally, a third approach to explaining the factor 10.905 is to propose a different spin rate for electric flux: if we let ωₑ = 10.905ωₘ, then ħωₘ = Z₀e²ωₑ. As with the others, if we adopt this approach, we must explain why it is preferable. I prefer the simplest approach – the one requiring the fewest assumptions – that jibes with reality. For example, if the final approach does not agree with the measured magnetic moment, we must throw it out.
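The three candidate explanations reduce to a few lines of arithmetic. A minimal Python sketch reproducing the quoted numbers (the variable names are mine):

    import math

    hbar = 1.054571817e-34
    Z0   = 376.730313668
    e    = 1.602176634e-19

    r = Z0 * e**2 / hbar                 # ~0.091701, the quoted charge/mass energy ratio
    theta_cos = math.acos(math.sqrt(r))  # phase-lag picture: ~1.26314 rad
    theta_sin = math.asin(r)             # vector picture: ~0.09183 rad, ~5.26149 degrees
    factor = 1 / r                       # dual-spin picture: ~10.905

    print(r, theta_cos, theta_sin, math.degrees(theta_sin), factor)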

 

Addendum 3:

A Tentative Complete Model

 

Based on the third assumption above and its qualifications, let’s tentatively assume it’s correct and complete the model:

 m/(μ₀ε₀) = ħωₘ = ((ħ/2)/tₚ)X ≡ Y₀lₚX = (h/tₚ)C = Z₀e²ωₑ

 where ωₑ ≡ 10.905ωₘ and X ≡ Δl/l = m/(lₚY₀μ₀ε₀) = 2tₚωₘ

Elementary particles are dual-sized structures with corresponding dual-spin. Space-time curvature is largely confined to a Planck-sphere, whereas electric flux resides largely within Compton dimensions. Inner spin is ħ/2 with rate ωₘ; outer spin is Z₀e² with rate ωₑ. The link between them is the elasticity/impedance of space (Y₀/Z₀ = 1.60661 × 10⁴¹ A·C/m).
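A minimal Python sketch of the link, again using the electron as an assumed example core (constant vintages shift the trailing digits of the quoted ratio):

    hbar = 1.054571817e-34; c = 2.99792458e8
    l_P, t_P = 1.616255e-35, 5.391247e-44
    Z0 = 376.730313668
    m_e = 9.1093837015e-31        # kg, electron as an illustrative core

    Y0 = hbar / (2 * l_P * t_P)
    print(f"Y0/Z0 = {Y0 / Z0:.5e} A*C/m")   # ~1.606e41, the elasticity/impedance link

    omega_m = m_e * c**2 / hbar             # inner (core) spin rate, from E = hbar*omega_m
    omega_e = 10.905 * omega_m              # outer (flux-ring) rate per the model
    print(f"omega_m = {omega_m:.3e} rad/s, omega_e = {omega_e:.3e} rad/s")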

 

Addendum 4:

Inspired by RL Oldershaw at http://home.pacbell.net/skeptica/thenewphysics.html

 

List of assumptions for EDST (elastic deformations in space-time) model of elementary particles (not necessarily ranked in order of importance):

1.   the cores of e.p.s (elementary particles) are objects constrained to Planck dimensions

2.   the cores are comprised of spherical standing waves of temporal curvature

3.   internal energy density is balanced with external pressure; external pressure is caused by the extreme inelasticity of space, Y₀ ≈ 10⁴⁴ N / 10²² N

4.   e.p.s are dual structures: twisted cores of temporal curvature coupled with Compton-sized spinning electric flux rings

5.   the distributed nature of the flux rings causes self-interference phenomena

6.   the geometry above and the two qualities of space-time, Y0 and Z0, are minimally sufficient to describe e.p.s and their interactions

7.   the strong force and gravitation are essentially the same thing – caused by residual extension of curvature beyond the core

8.   geometry explains instability such as with ⁸Be

The purpose of developing EDST is two-fold: to extricate/excavate physics from its self-made prison/tomb consisting of an agglomeration of arcane math, untestable concepts, unclear ideas, and a general avoidance of the scientific principle (propose, test, revise / start over) – and to provide a view of nature that is consistent, elegant, and verifiable.

 

The elegant nature of the model is exemplified by two revelations: an explanation of inertia and a view of matter. Inertia is simply the lack of relativistic energy to add or take away from a core at rest. View of matter: there are only two things in our universe, space-time and energy. Life is a functional arrangement of these two things.

 

Immediate problems with the model: the dual structure has dual spin, ωₑ = 10.905ωₘ. How? Why? Is it a result of how we measure spin? The core equations of the theory were derived using the concepts of linear elasticity and the ‘ideal stretched string’. The value for Y₀ above has two values because of that and assumption 3 above: derive Y₀ from the former and you get the first value; derive it from assumption 3 and you get the second. The consequences are that extension/strain increases drastically, from mere fractions to 1/6, and that electrons and protons have different radii (as opposed to the former model, which asserts both have Planck diameter).

 

Decisive tests: a decisive test was designed around the corollary premise that e.p.s are mini-dynamical systems which are disturbable. It is possible convention could dismiss this test with the path-integral approach to QM. But since there are eight assumptions above, many decisive tests should be designable. Dear reader, please help.

 

Addendum 5:

Three Tests of the Theory

 

I’ve been attacked recently for making temporal curvature theory “too convenient”. In fact, I did not design the core equation stating the relationship between impedance and elasticity. I did not design the explanation of inertia. I did not design the simple relationship between spin and linear strain / extension. I discovered these after making an assumption about elasticity. If we let c = 1, a typical convention in texts on elementary particles, we see lₚ = tₚ and E = m (of course, units are conserved). What’s more startling and insightful is this: X/lₚ = E/(ħ/2). Extension is to Planck-length as energy is to spin. The “information” of each is “encoded” in the other. It’s like saying you’ll know how far the golf ball will go based on the club you use – exactly how far. That’s amazing. Somehow, the exact relationship between extension and Planck-length – and energy and spin – is encoded in the “fabric of space”. But we arrived at these discoveries based on the linear definition of elasticity. So it is our choice of elasticity which defines the rest. Let’s look at it again: lₚ/X = (ħ/2)/E. A Planck-length of space is stretched by X in exactly the same proportion as spin-energy is to total energy. It boggles my mind – the simplicity – and I struggle to grasp the “consequences” or meaning of it. As I try to visualize it, spin-moment pushes against space – stretching it. But how it pushes, or what it pushes against, is “beyond me”. My only rational explanation is elastic-impeding space-time. The “bubble” of temporal curvature pushes against the extreme inelasticity of space. Admittedly, it’s difficult to visualize – but that does not make it wrong.

 

I used to be clever. I say “used to be” because over five years ago, I designed two tests of the theory before it was fully developed. I’ll include those here, plus the one I published in N and Ω.

The Inertia Test

            For about the last eight years I have been working on a conceptual unification of gravity, special relativity, electromagnetism, and elementary particles. Among the many discarded ideas, a few have endured, competing within my mind to explain the universe around us: the centrality of the impedance of space, the elusive elasticity of spacetime, the true nature of energy propagation, and the twist-and-fold of elementary particles. The longer I examine them, the more I see them as waves and less like actual particles. Just as a measured value is only as good as its error, a theory is only as good as the number of relevant testable hypotheses it produces; how hard I have tried to lure the muse of scientific inspiration to my side. Only recently have I succeeded. Those who have studied special relativistic effects, and those who have studied gravity, should find it hard not to notice some parallels: time dilation and Lorentz contraction. My explanation of those, and how they relate to the effects of strong gravity, ties back to the true nature of energy propagation, but this section is not about that; it is about producing a relevant testable prediction of the theory.

            Among relativistic effects, the parameter I failed to mention was mass. And this is precisely the parameter I propose to test. Imagine twirling a bowling ball on a hard smooth surface: initially, it requires a torque to accelerate it to a particular angular speed; τ = Iα (neglecting friction). The mass is called inertial mass. Whether you’re in space or on the moon, angular acceleration requires torque. Now, before you twirl it this time, paint a stripe down the side. Twirl it and record the time whenever you see the stripe. Keep track of every time you see the stripe. Move the experiment into space. Nothing changes (strictly speaking, time has sped up for you and the ball, and a distant observer in “flat” spacetime sees exactly that – a quickening of your experiment). But for you, nothing has changed .. or has it?

            I propose something has, and that something is inertial mass. I propose that inertial mass increases near a strong gravity source in the same proportion as time and height reduce. (Strictly speaking, spacetime is curved and objects are invariant, but this is not the place to argue that.) The bowling ball should spin measurably faster away from Earth’s gravity well, in order to have consistency between relativistic effects and those of strong gravity. The way to test this numerically would be to apply a specific torque to an object on Earth’s surface and measure the resultant speed of rotation, apply the same torque to the same object as far away from strong gravity as possible and measure rotational speed again, and compare results.

            After some calculation, three critical factors emerge, along with one number: the uncertainty in applied torque, the uncertainty in ship speed, the uncertainty in frictional effects, and 1.000 000 000 5. The number represents my estimate of the ratio of accelerations (deep space / earthly) resulting from [my predicted] change in inertial mass:

a₂/a₁ ≈ (1.000 000 000 5)ᵇ

where a₂ is the angular acceleration of a mass in deep space due to an applied torque, a₁ is for the same mass and torque on Earth, and b is the relativistic factor associated with the mass in deep space on a ship at a certain speed with respect to the first test point and time (this effectively makes the first test point on Earth the experimental rest frame). I assume the energy density of deep space is zero and arrive at that particular number based on an analogy between relativistic effects and energy-density effects. The uncertainties must add up to less than a factor of 10⁻¹⁰ in order to be able to detect the above predicted change in inertial mass. [Careful examination of the effect of gravity on the reference clock and the bowling-ball rotation (a kind of clock itself) compels me to proclaim there is no confounding temporal effect on the test system, but an astute reader may provide sound evidence to the contrary. I welcome intelligent assistance.]
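To make the detectability argument concrete, here is a minimal Python sketch comparing the predicted deviation against the stated uncertainty budget; the value b = 1 is a hypothetical placeholder, not a figure from the text:

    def accel_ratio(b, base=1.0000000005):
        # Predicted deep-space / earthly angular-acceleration ratio, per the text.
        return base ** b

    deviation = accel_ratio(1.0) - 1.0    # ~5e-10 for the placeholder b = 1
    budget = 1e-10                        # combined uncertainties must stay below this
    print(deviation, deviation > budget)  # detectable only if True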


The Flywheel Test

Setup: 100 kg flywheel, operating speed 100000 rpm, circumference 1 m

[I have been assured by engineers that this is a feasible experiment if proper materials are employed.]

Implications: 100000 m/min, 1667 m/s, ~10⁻⁶ g or 1 μg mass enhancement

Requirements: the accuracy of the scale weighing the flywheel assembly is unimportant, but its precision must be better than 1 μg, and magnetic interactions must be accounted for. The above calculation was based on the low-speed approximation KE = ½mv²; that was further halved to account for only half the equatorial enhancement pointing toward the earth. Then I realized a confounding factor: the accepted relativistic mass of a rotating object. Using KE/c² = m₀(γ − 1) implies an enhancement in mass of about 1.5 μg, but according to accepted theory, this should not be oriented with the equator – convention predicts an enhancement regardless of orientation. So another run must be made. The system should be weighed three times: at rest, rotating at 100000 rpm on its side, and rotating at 100000 rpm vertical. The results should prove conclusively which theory is correct: convention says ~100000.0000015 g regardless of orientation (in other words, a vertical run should prove/disprove that); my theory says ~100000.000001 g on its side (but no enhancement vertically); and if there is no enhancement at all – something is wrong with our setup or calculations.
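The two predicted enhancements follow from the stated setup. A minimal Python sketch of the arithmetic (the text rounds the DQM figure up to ~1 μg):

    c = 2.99792458e8
    m = 100.0                         # kg, flywheel mass
    rpm, circumference = 100000.0, 1.0

    v = rpm * circumference / 60.0    # rim speed: ~1667 m/s
    KE = 0.5 * m * v**2               # low-speed kinetic energy approximation

    dm_convention = KE / c**2         # ~1.5 ug, orientation-independent per convention
    dm_dqm = dm_convention / 2        # ~0.8 ug, equator-facing half only (on its side)
    print(f"convention: {dm_convention * 1e9:.2f} ug, DQM on side: {dm_dqm * 1e9:.2f} ug")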

            Just to be clear about my predictions, so there is no ambiguity:

                        Side        Vertical        Rest

Convention:             X           X               none

DQM:                    X           none            none

Key: X = mass enhancement

If there is a vertical enhancement and extra side enhancement, then the evidence would lean toward DQM - only some rethinking would have to be done. If the results produced equal enhancement on side and vertical, that would pretty much lay this test to rest.


 

Chapter Three – A Decisive Test

 

In the process of writing, I have changed the chapter ordering because of the importance of this concept. Science without tests is fantasy. The following test is not a test of a core equation, but it tests a corollary premise that e.p.s are mini-dynamical systems which are disturbable – and that these disturbances are measurable.

 

If two particles are identical in identity (two electrons, for example), velocity, and position – they are identical. (This is the conventional perspective – ignoring polarization.) They are indistinguishable. It doesn’t matter how they got there; they behave the same from there on. Regardless of how they arrived, if you later measure some attribute, that value should be the same, with the same level of error/uncertainty. Unless..

 

Unless particles are dynamical systems with a kind of ‘memory’ for past disturbances. Imagine two electrons arriving at the same place with the exact same momentum (at different times of course) but just after a huge difference in disturbance. If one arrived just after a small disturbance and the other arrived just after a much larger disturbance, there should be a larger uncertainty associated with the latter – if elementary particles have ‘memory’. If elementary particles are dynamical systems, they should exhibit larger uncertainties after larger past disturbances. This is the essence of the test.

 

The setting is somewhat like the inside of a TV tube: it’s evacuated, with an electron gun at one end and a target at the other. The EG is adjustable in intensity (number of electrons emitted per unit time). The target, T, is a thin gold foil leaf which bends easily under electron impact. The following is a baseline setup:

EG----------------------T

The EG is run at various intensities to measure deflection of T. Perhaps a laser bounced off T could give better resolution. In any case, we’re attempting to measure uncertainty in electron momentum – which is the variation in deflection of T. Theoretically,

   ∆p = ∆(mv) = 2(m∆v + v∆m) ≈ 2m∆v   (1)

since ∆m should be negligible. Once calculated, this can be compared to the measured uncertainty.

 

The next setup is called “small disturbance” and introduces three magnetic deflectors which disturb the beam by pure reflection: a small magnetic force from MD1 (magnetic deflector 1) deflects the beam off-target, MD2 over-corrects, and MD3 places the beam back on axis:

                   MD2

EG-----MD1     MD3-T

 

The final setup is called “large disturbance” and introduces a larger deflection by using stronger magnets (or more powerful electro-magnets):

                   MD2

                      /\

                    /    \

EG-----MD1     MD3-T

 

The entire path length from EG to T is the same in setups two and three. This is to minimize the ‘number of changed variables’ between the two. That means the relative sizes of the diagrams above are deceptive: the physical separation between MD1 and MD3 is actually larger in setup two.

 

Applying Newton’s second law and the relationship between speed and acceleration (speed is the time-integral of acceleration), we find the uncertainty in momentum is directly related to the uncertainty in force:

   ∆p ≈ 2∆Ft   (2)

where F is the force imparted by MD3, t is the ‘interaction time’ of an electron with MD3, and uncertainty in time is negligible. Note that the force here induces an angular acceleration (a turn), not a linear acceleration axial with the beam. The only confounding factor is t, the interaction time with MD3: in the “small disturbance” setup, that time should be smaller than in the “large disturbance” setup because the path of the electron crosses less magnetic flux over the same volume. So that factor will have to be accounted for in (2).
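A minimal Python sketch of how (2) would be applied; the values of ΔF, t, and the flux factor below are purely hypothetical placeholders, not figures from the text:

    def dp_uncertainty(dF, t, flux_factor=1.0):
        # Eq. (2): dp ~ 2 * dF * t. flux_factor rescales the interaction time
        # to account for the larger magnetic flux in the "large disturbance" setup.
        return 2.0 * dF * (t * flux_factor)

    dF = 1e-18   # N, hypothetical uncertainty in MD3's deflecting force
    t  = 1e-9    # s, hypothetical electron/MD3 interaction time

    print(dp_uncertainty(dF, t))                   # small-disturbance setup
    print(dp_uncertainty(dF, t, flux_factor=2.0))  # large-disturbance setup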

 

We are trying to calculate an expected uncertainty in deflection of T as compared to the baseline. Those following convention are free to employ the path-integral formulation devised by Feynman and compare with the above. Whatever you do, examine your assumptions: if the path-integral requires you to account for uncertainty in forces and interaction times for all three magnets, then Feynman is assuming elementary particles are dynamical systems with random state variables. If that’s true, then convention and determinism differ by only one fundamental assumption: random state variables vs internal oscillation.

 

There are benefits that ‘go with’ determinism which convention conveniently ignores: the qualities of space-time constrain elementary particles – these constraints are natural and ‘flow’ from the properties of space-time – as compared to convention’s attempt with 11 dimensions and string theory (their dogged adherence to reduction and probability becomes ludicrous and laughable). The other benefit of determinism is that it makes sense. Why appeal to probability when we have the systems approach? Why automatically assign the label “random wave” to elementary particles, based on appearance, ego, and a historical revulsion toward determinism? It boggles my mind – the intransigence of convention. I’ve realized “a marriage” is not the proper analogy for convention and probability-reduction. The proper analogy is a baby clinging to its mother’s breast, desperate for milk. The conventional adherence to probability-reduction is infantile.

 

Sam Micheal, 21/NOV/2008

micheal at msu dot edu