By Engineer Saviour - Blaze Labs
This
section is a compilation of personal ideas which to my knowledge have not been
tackled by conventional physics teaching, and in some cases contradict
conventional physics. Sadly enough, once a certain way of viewing things has
won over the opposing theories, a paradigm is cemented and enshrined as
something untouchable, even if experiments seem to invalidate its very
foundations. It is unfortunate that most books and colleges suggest that the
existing explanations of science are final and that after reading the whole
book or finishing his course of study, the student should go away satisfied
with his wisdom. In reality, both foundations and frontiers of science are
still very unclear even to the best scientists around. It is often easy to find
out 'what' happens, but much more difficult to understand 'why' it happens. In
fact, if one digs to that level of detail, he will find no answers at all.
The rules of electricity, conservation of momentum and conservation of energy
have been with us for over two centuries. During this time, most scientists
have forgotten, or have never realised, that we have no idea where these
rules come from and no concept of what causes them; they are simply accepted as
the truth. Few are the scientists currently searching for such answers, which
is astonishing, because answering these challenging questions is potentially
the most incredible step science will ever make.
It is nothing new that, as decades pass, science has to be revised from time to
time, correcting concepts and introducing new theories. This upgrading process
is usually not a matter of just changing a constant or a parameter within a few
equations: whole concepts have to be reworked, books rewritten, and old
theories abandoned. The more time that passes between each of these general
science upgrades, the more difficult it becomes for the whole scientific
community, students included, to accept a better unified science. If one
studies the present-day science situation, he can easily notice how fractured
and patched our current scientific knowledge has grown. This is a sign that the
time has come for one of those general upgrades.
We will here investigate some of the 'wrong turns' taken by our teachers, with
emphasis on the 'hard particle' paradigm. This paradigm alone, which has
unfortunately been carried over for many generations, is the culprit for many
wrong concepts in conventional science that lead to the excessive number of
'constants' and 'blind assumptions', together with the acceptance of
inevitable paradoxes. Until one grasps the correct concept of what matter is
made of, he cannot say that he understands gravity, energy or momentum, since
all these things are interrelated. Once science accepts the concept introduced
in The Particle section, the present-day assumptions and unknown constants will
be reduced to the properties of space, which are few and simple. Also, science
has to be built upon a unified theory with zero paradoxes, since one paradox is
enough to show that something basic is very wrong. Here we also go further and
investigate very interesting topics such as gravity, the so-called
particle-wave duality, sonoluminescence, platonic solids, the nucleus fractal,
sacred geometry, and higher dimensional space.
Related books:
Feynman Lectures on Physics Volume 1 - Mainly mechanics, radiation & heat
Feynman Lectures on Physics Volume 2 - Mainly electromagnetism & matter
Feynman Lectures on Physics Volume 3 - Mainly quantum mechanics
Heaven's Mirror - Quest for the Lost Civilization, by Graham Hancock and Santha Faiia
Exploring the Physics of the Unknown Universe, by Milo Wolff
Introduction to the Unified Theory
The Unified Field Theory is sometimes called the Theory of
Everything (TOE, for short): the long-sought means of tying together all known
phenomena to explain the nature and behaviour of all matter and energy in
existence. The advantage of a unified theory over many fragmented theories is
that a unified theory often offers a more elegant explanation of data, and may
point towards future areas of study as well as predict nature's laws.
In physics, a field refers to an area under the influence of some force, such
as gravity or electricity, for example. A unified field theory would reconcile
seemingly incompatible aspects of various field theories, to create a single
comprehensive set of equations. Such a theory could potentially unlock all the
secrets of nature and make a myriad of wonders possible, including such
benefits as time travel and an inexhaustible source of clean energy, among many
others.
For example, in 1861-65 James Clerk Maxwell explained the interrelation of
electric and magnetic fields in his unified theory of electromagnetism. Then,
in 1881-84 Hertz demonstrated that radio waves and light were both
electromagnetic waves, as predicted by Maxwell's theory. Early in the 20th
century, Albert Einstein's general theory of relativity - dealing with
gravitation - became the second field theory. The term unified field theory was
coined by Einstein, who was attempting to prove that electromagnetism and
gravity were different manifestations of a single fundamental field.
Regrettably, Einstein failed in this ultimate goal. When quantum theory entered
the picture, the puzzle became more complex. The theory of relativity explains
the nature and behavior of all phenomena on the macroscopic level (things that
are visible to the naked eye); quantum theory explains the nature and behavior
of all phenomena on the microscopic (atomic and subatomic) level. Perplexingly,
however, the two theories are incompatible. Unconvinced that nature would
prescribe totally different modes of behavior for phenomena that were simply
scaled differently, Einstein sought a theory that would reconcile the two
apparently irreconcilable theories that form the basis of modern physics.
Although electromagnetism and the strong and weak nuclear
forces have long been explained by a single theory, known as the standard
model, gravitation does not fit into the equation. The current quest for a
unified field theory (sometimes called the holy grail of physicists) is largely
focused on the superstring theory and, in particular, on an adaptation known as
M-theory.
This theory aims at providing an explanation for all known forces and physical
effects using the same language, and showing that everything is made up of the
same elementary entity. Physicists hope that a Grand Unified Theory will unify
the strong, weak, and electromagnetic interactions. There have been several
proposed Unified Theories, but we need data to pick which, if any, of these
theories describes nature. All the interactions we observe are different
aspects of the same, unified interaction. However, how can this be the case if
strong and weak and electromagnetic interactions are so different in strength
and effect? Strangely enough, current data and theory suggest that these varied
forces merge into one force when the particles being affected are at a high
enough energy.
There are many things not yet properly explained by conventional physics. Forces like magnetism, static electricity and gravity need to be inter-related to a high degree, and their activities must be better understood. Moreover, what I call the hard particle paradigm has made this achievement look much more difficult than it really is.
The Elementary Entity
Dielectric element
Any point in space may be completely defined by its position at a particular time. Any macro entity may be considered to be made up of a very large number of smaller entities, which in turn are made up of smaller entities. This kind of entity nesting does not go on to infinity; a lower limit exists, where the entities are of the smallest possible size, equivalent to the Planck length, and we will refer to these as elementary entities. Each entity is unique, and does not need to include any 'particle properties'; that is, even vacuum is composed of elementary entities. What makes one macro entity different from another is its structural shape. The basic requirements for this entity to exist are space and time.
Space and time themselves form an electrically oscillating
circuit in space, and fill our universe, including what we refer to as vacuum.
These elements can handle different, but discrete, values of energy, frequency
and polarisation. Permittivity ε defines the elasticity of the elements,
whilst permeability μ defines their inertial property. Later on it will be
mathematically shown that all these parameters boil down to different relations
between space and time. Macro elements (such as electrons, atoms, and particles
of 'matter') can be accommodated in such elements by filling up the above
parameters with those of the macro structure. Note that a mass is not tangible,
and if one were to disassemble a mass he would end up with these elementary
entities, which are nothing but electrical entities.
The scientists' hunt for the smallest existing atomic particle is a lost
battle, because there is no such thing. Increasing the power of zooming into
matter will just show that dielectric elements can be infinitely small, and are
in fact electrical in nature. The fact that we can 'feel' most 'massive' macro
entities, such as a large number of atoms, is only a reaction of forces
(electrically accounted for) between our own body's entity structure and the
object's structure. We have to accept the fact that matter is just a complex
structure of electromagnetic elementary entities, even the electron itself.
Origins of Mass, Motion, and Inertia
So what makes an observable 'material' entity different
from empty space? The answer to this question was given in the theory of
mechanical waves proposed in 1923 by the Nobel Prize winner Prince de
Broglie. According to this theory, material particles are always linked with a
'system' of travelling waves, a 'wave-packet' or standing wave, forming the
constituent parts of matter and determining its movements. An a priori
assumption is that space is filled with travelling waves. In general these
waves neutralise one another, but at certain points it happens that a great
number of waves are in such a position, or structure, as to reinforce one
another and form a marked observable wave crest. This wave crest then
corresponds to a material particle! So, the answer to our original question is
that a material entity is a structure made up of standing waves, whose origin
are the travelling waves that make up empty space.
The animation shown here is of a water surface in a closed vibrating tray, made to vibrate in different modes by simply varying the frequency. One can easily see how a 'system of travelling waves', or better, standing waves, generates what most would call a particle, whilst in fact it is just a 3D formation of standing waves. Since, however, the waves may travel in different directions, they will part from one another, and the wave crest disappears to re-appear again at a nearby point. The material particle has moved, or better, teleported. The same mechanism applies to the electromagnetic standing waves which constitute all matter. The wave crest will thus travel in quantum steps, but the velocity with which this is done is quite different from the speed with which the underlying wave systems move, that is, light speed. The material particle in general moves at right angles to the surfaces of these mechanical waves, just as a ray of light is, as a rule, directed at right angles to the surface planes of the light waves.

First we have to accept the fact that every existing location in space is made up of discrete electromagnetic elementary entities. Since this element fills a three-dimensional space (volume), it has to have its own dimensions in space, with a shape or structure which leaves no discontinuities (or space-time voids) when neighbouring cells surround it to form a bigger cell. Space-time is the product of the volume taken by such an element and the time for the element to go through one oscillation. To comprehend this, we must slightly change the concept of what is mass and what is a particle, thus reducing matter and vacuum to the same definition: waves. The actual structural shape is not important at this point; however, it should be one which promotes cascading of similar shapes to form bigger macro oscillating structures.
Moving an object whose net electrical properties differ from those of its surroundings means that an external force has to be applied in some way to this object, in order to reconfigure the parameters of all the electromagnetic elements in front of that object to the same properties as the moving object, and at the same instant reconfigure all the elements behind the object to those of the surrounding elements. 'In front' and 'behind' are relative to the direction of motion.
From a visual point of view, what we call 'matter' is continuously reconstructed in space, whether it has a static or a dynamic position in space. Simply moving an object by just one millimetre would mean integrating a huge but discrete number of such processes. That is, motion is not a continuous process but rather a huge number of digital processes, crests of standing waves disappearing and reappearing elsewhere in space. Nothing actually moves; motion becomes an interpretation of different shapes at different times. The same applies in the spatial time dimension for stationary objects, since they are 'moving' in the time dimension. In this example, the arrow is composed of the green dielectric, and the surrounding white grid could be vacuum. Each grid square is equal to one Planck length, the size of the basic electromagnetic entity, so nothing can exist in between, not even space or time.
The arrow, which could be a mass or particle, is thus seen
to be moving, but looking closer, one can see that this is just an optical
illusion, the same illusion that makes us believe that an object moves.
Reconstruction itself does not need external energy to be applied, because the
energy required in front of the moving object is balanced out by that released
at the rear. But if we need to change the rate of reconstruction (a change in
velocity or direction), then external energy would be required to create an
imbalance between the front and rear electrical parameters.
This explains the fact that a stationary object needs an external energy source
to start motion in space. It also explains inertia. Once a body is moving, it
is reluctant to slow down unless external energy is applied (friction, for example).
This is because once the external energy has been used to modify all parameters
of EM elements within the object, the object will continue to reconstruct in
any location in space with those parameters, until another external energy is
applied. This is why a stationary or static object remains still, and a moving
object remains moving. It also explains why when a rotating object is no longer
restricted in moving round by its centripetal force, it continues to move in a
straight line tangent to the point it left on its circular path - it just keeps
on the last reconstruction parameters. Note that the above motion properties
work linearly only in a uniform time-space volume, that is while the object is
travelling through a uniform EM field, similar to the grid shown in the arrow
model above.
Gravity explained
It is frightening to think that we are ourselves being reconstructed all the time by dielectric elements, and you may ask: what if I am not reconstructed during the next second? Or, what if a mistake happens during reconstruction? The answer is that unless there is a space-time void or discontinuity, reconstruction is a perfect process done with no energy input expense. Input energy may however affect reconstruction, for example in a growing living cell, or an accelerating field. A continuous space-time volume can also be considered to be elastic, and if a dielectric volume has a non-linear shape, it will require external energy to be applied to a moving object within it, or conversely, release energy during reconstruction. This means that there would be an imbalance of forces between the electromagnetic elements in front of and those behind the object. In such cases, objects at rest may not remain at rest, and moving objects may not continue on their straight path at constant velocity. The force we call gravity is one such case. In such a field, the energy needed to reconstruct an object in the higher flux density direction is less than that required to reconstruct the surrounding dielectric behind the object. Thus an object will move, with no external energy applied, towards the higher flux density.
To visualise the effect of non-linear electromagnetic element volume (space-time) at a centre of gravity, imagine the surface of a rubber sheet with a uniform grid drawn on it, and visualise the grid when the rubber is pulled down at a point below its surface. Such bending of space-time is a result of this non-linearity of the parameters present in the dielectric volume. One method of generating a non-linear dielectric volume is to expose the whole dielectric volume under concern to a non -linear electric field, with the 'centre of gravity' being the centre of highest electric field flux density.
An example of this is our planet, which has a non-linear electric field gradient with its highest gradient near the surface. Linear gravity does not exist; gravitational force is always non-linear (anisotropic), pointing towards its centre. That is, earth's g = 9.8 m/s² at ground level, but it decreases at higher altitudes. Linear gravity results in a linear space-time and is the same as zero gravity. Similarly, an electromagnetic element exposed to a linear force field will reconstruct the objects in it at zero energy transfer. However, when exposed to a non-linear force field, an object moving within it will experience a force imbalance in the direction of the highest force flux density. So the attraction of matter to centres of gravity is not a result of matter itself, but of the space-time 'stretching' and 'compression' in front of and behind the moving object. A massless dielectric, that is, space itself, would still be 'accelerated' towards the point of easier reconstruction. The mass movement is just an indication of the movement of its electromagnetic constituents.
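The decrease of g with altitude mentioned above is described quantitatively by the familiar inverse-square law. As a rough numeric sketch (g0 and R are nominal Earth figures, and the 400 km altitude is just an illustrative value):

```python
# Inverse-square falloff of gravitational acceleration with altitude:
# g(h) = g0 * (R / (R + h))^2, taking nominal Earth values.
g0 = 9.8        # surface acceleration, m/s^2
R = 6.371e6     # mean Earth radius, m

def g_at(h):
    """Acceleration due to gravity at altitude h metres above the surface."""
    return g0 * (R / (R + h)) ** 2

# At 400 km altitude, g is still close to 90% of its surface value.
print(g_at(400e3))
```

So "weightlessness" in low orbit is not the absence of gravity but free fall within a still-substantial field.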
The dual nature of light and matter
Nearly two and a half thousand years ago, Democritus created the point
particle of mass to represent the fundamental elements of tangible matter. This
concept was satisfactory until about 1900, when the quantum properties of
matter were found. Then puzzles, problems, and paradoxes appeared, because most
properties of matter derive from the wave structure of particles. Democritus
couldn't know this, and until recently few persons challenged this concept, now
embedded as a paradigm into mainstream science. Nevertheless, Schrödinger,
de Broglie, Dirac, and Einstein, the founders of quantum theory, preferred a
wave structure of matter, and in the last decades researchers have
experimentally validated their intuition. Unfortunately however, mainstream
science is still stuck within the 'hard particle' paradigm and prefers to use
the term 'wave particle duality', instead of clearly establishing that 'hard
particles' are nothing but an old scientific misinterpretation, due to our
sense of touch. Once the wave structure of matter is accepted, most physics as
we know it will automatically collapse, and that will be the start of a
completely new unified science.
The notion of wave-particle duality states that an electron, for example, may
sometimes act like a wave and sometimes like a particle. Conventional physics
explains that when electrons are excited, packets of quanta are released as
electromagnetic radiation and then 'hit' matter, the same way a ball hits a
wall, resulting in kinetic and heat energy at the target. It also states that
Energy = hf, where h is Planck's constant (h = 6.63E-34 Js); that is, energy is
not continuous, but rather exists in discrete steps (quantised) in exact
multiples of hf. This wave-particle duality characteristic, together with the
uncertainty principle and quantisation, has totally upset most of the past
well-known physicists, and although both wave and particle characteristics have
been experimentally confirmed, it is still very unclear what mechanism is at
work within matter or wave.
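The quantum relation E = hf can be checked numerically in a few lines; the green-light frequency used here is merely an illustrative value:

```python
# Photon energy E = h*f, using the value of h quoted above.
h = 6.63e-34  # Planck's constant, joule-seconds

def photon_energy(f_hz):
    """Energy in joules of one quantum at frequency f_hz."""
    return h * f_hz

# Green light near 5.6e14 Hz carries roughly 3.7e-19 J per quantum.
print(photon_energy(5.6e14))
```

The tiny size of h is why this granularity goes unnoticed at everyday scales.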
Some physicists, attempting to unify gravity with the other fundamental forces, have come to a startling prediction: every fundamental matter particle should have a massive "shadow" force carrier particle, and every force carrier should have a massive "shadow" matter particle. This relationship between matter particles and force carriers is called supersymmetry. For example, for every type of quark there may be a type of particle called a "squark." Again, such reasoning is highly distorted by the hard particle paradigm. In a separate experiment, researchers led by Valery Nesvizhevsky at the Laue-Langevin Institute in France isolated hundreds of neutrons from all major effects except gravity, then watched them in a special detector as gravity pulled them down. It was not a smooth fall! As expected by the standing wave theory, the neutrons fell in quantum jumps. This confirms that particle motion in the macro world is not a continuous process! So we see that hard particles and their motion get quite weird, with quantum movement and imaginary components needed to describe their motion. This is strong evidence that a reality based on hard particles moving in space is totally wrong, and has resulted in a whole mess of incompatible scientific fields. Contrary to what our senses make us believe, no experiment has ever shown that hard particles either exist or move! But, on the other hand, we have experiments that show particles disappearing from one place to appear in another place without 'moving' along a path, somehow as if teleported.
For too many years people imagined atoms as point electrons orbiting around a nucleus. This myth, obviously imitating our planetary system, was shown wrong by quantum theory more than sixty years ago, and despite this fact, it is still the first basic model that students are exposed to in some schools. For example, in the hydrogen atom, quantum theory predicts the electron presence as a symmetrical spherical cloud around the proton. Some physicists, still under the spell of the old myth, concluded that the point bits of matter were still there, even though quantum theory contains no notion of point particles, just because they insist that matter has to be made up of smaller and smaller matter. Actually, in the hydrogen atom both the electron wave structure and the proton have the same centre. As described in my 'Particle' section, both the nucleus and the electron's structure can be imagined like onion layers: spherical concentric layers of electromagnetic waves around a centre. The amplitude of the EM waves decreases with radius, as shown in the graph below. There are no point masses, no orbits - just waves.
As you can read in 'The Particle' section, the spherical standing wave concept of matter solves all Quantum Theory enigmas and more. The spherical standing wave concept, based on matter structured of concentric spherical polyhedra, avoids and explains the paradoxes and problems of hard point particles. In such a theory, mass and charge simply do not exist in nature, and eliminating them from particle structure also gets rid of their problems. As a matter of fact, this theory has only one property as a priori - space together with its built-in characteristics. Instead of mass, charge and time, we have wave nodes and their motion. Standing waves in space possess the properties of mass and charge which we observe in the macro world, but without the eternal problem of finding mass points which do not exist. This simple theory is thus valid from the quantum level to the whole universe, unifying quantum theory with nature. The overwhelming proof of the standing wave structure of matter is the discovery that all the former empirical natural laws originate from the wave structure. In this theory, all things from quantum level to the universe itself obey the same laws, and the shelled spherical polyhedra standing wave structure appears to agree with experimental observations, and can be used to explain such things as nucleus magic numbers, by simply studying their geometrical arrangement.
The above diagram
shows the amplitude of EM waves as they reach the centre of the spherical wave.
The rightmost side is the nucleus core, and the X-axis represents the distance
from the core, or radius of the sphere. In the top diagram, observe the wave
moving inward. Where the radius is zero, the amplitude is infinite. In the
second diagram, observe the wave moving outward. Again, where the radius is
zero, the amplitude becomes infinite. This infinite value at the core does not
actually occur in practice, as the radius can never be smaller than the Planck
length, so 1/Lp may never reach infinity, and this keeps
the model valid.
The lower diagram is the resultant of the two waves. That is, the two
amplitudes are added together, by the superposition property of waves. The sum
of the waves is the radial amplitude of the real electron. Watch carefully how
the sum wave moves. The wave does not move inward or outward; it goes up and
down. That is, it becomes a standing wave: standing waves with nodes fixed in
space, made up of incoming and outgoing EM travelling waves - and this is what
we usually call particles of matter. The nucleus, the electron, and all the
myriad of other particles are just structures of standing spherical quantum waves.
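The superposition described above can be verified numerically: an outward wave sin(kr − ωt)/r added to an inward wave sin(kr + ωt)/r equals 2 sin(kr) cos(ωt)/r, whose nodes stay fixed in space while the amplitude merely oscillates up and down. A minimal check, with arbitrary k and ω:

```python
import math

# Superpose an outward and an inward spherical wave (amplitudes ~ 1/r)
# and confirm the sum is a standing wave:
# sin(k*r - w*t)/r + sin(k*r + w*t)/r == 2*sin(k*r)*cos(w*t)/r
k, w = 2.0, 3.0  # arbitrary wavenumber and angular frequency

def in_plus_out(r, t):
    return (math.sin(k*r - w*t) + math.sin(k*r + w*t)) / r

def standing(r, t):
    return 2.0 * math.sin(k*r) * math.cos(w*t) / r

for r in (0.5, 1.0, 2.5):
    for t in (0.0, 0.7, 1.9):
        assert abs(in_plus_out(r, t) - standing(r, t)) < 1e-12
print("in-wave + out-wave forms a standing wave")
```

Note that the nodes sit wherever sin(kr) = 0, independent of time, which is exactly the "fixed in space" behaviour described in the text.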
A wave may either be a travelling wave or a standing wave which is fixed in
space. This means that matter is a structure of EM waves, not just a simple
concentration of EM waves, but a tuned standing wave structure. In this respect
Einstein's equation E=mc² is quite misleading, because the equation,
although mathematically correct, gives no indication of the structure of E
required to get a resulting mass. In fact, every attempt to concentrate huge
quantities of energy to generate mass has been a failure. A resonant standing
wave is a prerequisite for generating any form of matter from pure energy, and
we all know that the building blocks of a standing wave are in-going and
out-going waves.
Once this matter standing wave structure is broken into smaller structures or
even destroyed, the EM elements making it up are released and detected as
travelling EM waves or other 'chunks' of smaller standing waves. What all these
new ideas seem to suggest is that physical objects (matter), or even reality
itself (things in motion), are not at all what everyone has supposed them to be
for over two thousand years, and it is surely about time that current science
makes up for this, even if this comes at the cost of rebuilding science itself
from scratch.
In this picture, anything in existence in our 3D universe forms part of a
single entity, a single, though complex, pattern of standing waves. The outward
waves from a body evoke a response from the universe: the production of
inward waves from reflecting bodies elsewhere in the universe. However, the
reflected waves begin before the moment of acceleration and before the arrival
of the source waves. The combined waves themselves are the particles, and no point
mass or charge is needed. Every charged particle is a structural part of the
universe, and the whole universe contributes to each charged particle. Every
particle sends quantum waves outward, and receives an inward response wave from
the universe. Inward and outward waves, although both spherical, are not the
same function, in fact they are different for each particle. Although the
variety of molecules and materials populating the universe is enormous, the
building bricks are just two, a spherical In-Wave and a spherical Out-Wave.
The first hint of the mechanism of cosmological energy transfer was Ernst
Mach's observation in 1883. He noticed that the inertia of a body depended on
the presence of the visible stars. He asserted that "Every local inertial
frame is determined by the composite matter of the universe" and jokingly,
"When the subway jerks, it is the fixed stars that throw us down."
How can information travel from here to the stars and back again in an instant?
Mach's principle was criticised because it appeared to predict instantaneous
action-at-a-distance across empty space. As Einstein observed: "Forces
acting directly and instantaneously at a distance, as introduced to represent
the effects of gravity, are not in character with most of the processes
familiar to us from everyday life." Space is not empty because, although
not easily observed, it is the quantum wave medium produced by waves from every
particle in the universe, as predicted by Mach's principle long ago. The energy
exchange of inertia, charge, and other forces is mediated by the presence of
the space medium. There is no need to 'travel' across the universe. As Einstein
also noted: "Special relativity is founded on the basis of the law of the
constancy of the velocity of light. But the general theory of relativity cannot
retain this law. On the contrary, according to this latter theory the velocity
of light must always depend on the coordinates when a gravitational field is
present."
The spherical IN and OUT waves of the source and receiver oscillate in two-way
communication, until a minimum amplitude condition is obtained (i.e. resonant
coupling). The decrease of energy (frequency) of the source will equal the
increase of energy of the receiver. Thus energy is conserved. EM waves are
observed as a large number of such quantum changes.
Three hundred years ago, Christiaan Huygens, a Dutch mathematician, found that
if a surface containing many separate wave sources was examined at a distance,
the combined wavelets appeared as a single wave front. This wave front is
termed a Huygens Combination of the separate wavelets. This mechanism is the
origin of the in-waves, whereby our In-Waves are formed from a Huygens'
Combination of the Out-Waves of all the other matter in the universe. This
occurs throughout the universe, so that every particle depends on all others to
create its in-wave. We have to think of each particle as inextricably joined
with other matter of the universe. Although particle centers are widely
separated, all particles are one unified structure. Thus, we are part of a
unified universe, and the universe is part of us.
Standard units to Spacetime conversion table
Leading the way to unification
One of the most powerful mathematical
tools in science is dimensional analysis. Dimensional analysis is often applied
in different scientific fields to simplify a problem by reducing the number of
variables to the smallest number of "essential" parameters. Systems
which share these parameters are called similar and do not have to be studied
separately. More often than not, two apparently different systems are shown to
obey the same laws, and one of them can be considered analogous to the other.
The dimension of a physical quantity is the type of unit needed to express it.
For instance, the dimension of a speed is distance/time and the dimension of a
force is mass×distance/time². Conventionally, we know that in mechanics, every
physical quantity can be expressed in terms of MLT dimensions, namely mass,
length and time or alternatively in terms of MLF dimensions, namely mass,
length and force. Depending on the problem, it may be advantageous to choose
one or the other set of fundamental units. Every unit is a product of (possibly
fractional) powers of the fundamental units, and the units form a group under
multiplication.
In the most primitive form, dimensional analysis is used to check the
correctness of algebraic derivations: in every physically meaningful
expression, only quantities of the same dimension can be added or subtracted.
The two sides of any equation must have the same dimensions. Furthermore, the
arguments to exponential, trigonometric and logarithmic functions must be
dimensionless numbers, which is often achieved by multiplying a certain
physical quantity by a suitable constant of the inverse dimension.
The Buckingham π theorem is a key theorem in dimensional
analysis. It states that a functional dependence between a certain
number n of variables can be reduced, given the number k of independent
dimensions occurring in those variables, to a set of p = n - k independent,
dimensionless numbers. A dimensionless number is a quantity which describes a
certain physical system and which is a pure number without any physical units.
Such a number is typically defined as a product or ratio of quantities which DO
have units, in such a way that all units cancel. A system of fundamental
units (or sometimes fundamental dimensions) is such that every other unit can
be generated from them. The kilogram, metre, second, ampere, Kelvin, mole
and candela are supposed to be the seven fundamental units, termed SI base
units; other units such as the newton, joule, and volt can all be derived from
the SI base units and are therefore termed SI derived units. The choice of
dimensionless parameters is not unique: Buckingham's theorem only provides a
way of generating sets of dimensionless parameters, and does not single out
the most 'physically meaningful' ones.
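The theorem can be illustrated with a few lines of code. The sketch below is my own illustration, not from the text: it takes each variable's exponents in the fundamental dimensions (M, L, T here), computes k as the rank of the dimensional matrix over the rationals, and reports p = n - k for the classic simple-pendulum problem.

```python
# Buckingham pi theorem sketch: the number of independent dimensionless
# groups is n - k, where k is the rank of the dimensional matrix.
from fractions import Fraction

def matrix_rank(rows):
    """Rank of a matrix over the rationals, by Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    rank, col = 0, 0
    ncols = len(m[0]) if m else 0
    while rank < len(m) and col < ncols:
        pivot = next((r for r in range(rank, len(m)) if m[r][col] != 0), None)
        if pivot is None:
            col += 1
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(len(m)):
            if r != rank and m[r][col] != 0:
                f = m[r][col] / m[rank][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank += 1
        col += 1
    return rank

# Simple pendulum: period t, mass m, length l, gravity g.
# Each entry gives the exponents in (M, L, T).
variables = {
    "t": (0, 0, 1),
    "m": (1, 0, 0),
    "l": (0, 1, 0),
    "g": (0, 1, -2),
}
n = len(variables)
k = matrix_rank(list(variables.values()))
print(n - k)  # number of independent dimensionless groups -> 1
```

The single surviving group is the familiar g·t²/l, which is why the pendulum period depends on l and g but not on m.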
Why not choose SI ?
We know that
measurements are the backbone of science. A lot of work has been done to get
the present self-coherent SI system of physical
parameters, so why not choose SI as the foundation of a unifying theory?
Because if the present science is not leading to unification, it means that
something in its foundations is really wrong, and where else to start searching
if not in its measuring units. The present SI system of units has been laid
out over the past couple of centuries, while the very knowledge that generated
it in the first place has changed, making the SI system more or less a
database of historical units. The major fault in the SI system can be easily
seen in the relation diagram shown here, officially issued by BIPM (Bureau International des Poids
et Mesures). We just added the 3 green arrows for the Kelvin unit.
One would expect to see the seven base units totally isolated, with arrows
pointing radially outwards towards derived units, instead, what we get is a
totally different picture. Here we see that the seven SI base units are not
even independent, but totally interdependent like a web, and so do not even
strictly qualify as fundamental dimensions. If for instance, one had to change
the definition of the Kg unit, we see that the fundamental units candela,
mole, Amp and Kelvin would change as well. In the original diagram issued
by BIPM, the Kelvin was the only isolated unit, but as I will describe shortly,
it should be well interconnected as shown by the additional green arrows. So
one cannot say there are seven fundamental SI units if these units are not
independent of each other. The other big fault is the obvious redundancy of
units. Although not very well known to all of us, at least two of the seven
base units of the SI system are officially known to be redundant, namely the
mole and the candela. These two units have been dragging along, ending up in
the SI system for no reason other than historic ones.
The mole is merely a certain number of atoms or molecules, in the same sense
that a dozen is a number; there is no need to designate this number as a unit.
The candela is an old photometric unit which can easily be derived from
radiometric units (radiated power in Watts).
Temperature is yet another base unit that can be made redundant by adopting
new definitions for its unit. Temperature could be measured in energy units
because, according to the equipartition theorem, temperature is proportional to
the energy per degree of freedom. It is also known that for a monatomic ideal
gas the temperature is related to the translational motion or average speed of
the atoms. The kinetic theory of gases uses statistical mechanics to relate
this motion to the average kinetic energy of atoms and molecules in the system.
For this case 11605 degrees Kelvin corresponds to an average kinetic energy of
one electronvolt, equivalent to 1.602E-19 Joules. Hence the Kelvin could also be
defined as a derived unit, equivalent to 1.3806E-23 Joule per degree of freedom,
having the same dimensions of energy. Every temperature T has associated with
it a characteristic amount of energy kT which is present in surroundings with
that temperature at the quantum and molecular levels. At any given temperature
the characteristic energy E is given by kT, where k (=1.3806E-23 m2kg/sec2/K)
is the Boltzmann constant, which is nothing more than a conversion factor
between characteristic energy and temperature. Temperature can be seen as an
alternative scale for measuring that characteristic energy. The Joule is
equivalent to Kg m2/sec2, so for the Kelvin unit we had
to add the three green arrows pointing from Kg, metres and seconds which are
the SI units defining energy. Furthermore, the definitions of the supplementary
units, radian and steradian, are gratuitous. These definitions properly belong
in the province of mathematics and there is no need to include them in a system
of physical units. So what are we left with? How many dimensions can the SI
system be reduced to? Looking again at the SI relations diagram, let us see
which units DO NOT depend on others, that is which are those having only
outgoing arrows and no incoming arrows. We see that in the SI system, only the
units Seconds and Kg are independent. So, this means that the SI system can be
reduced to no more than two dimensions, without losing any of the physical
significance of the involved units. But we know that there are a lot of
other combinations that can lead to the same number of fundamental dimensions,
and that Kg and Seconds might not be the most physically meaningful independent
dimensions. Strictly speaking only Space and Time are fundamental dimensions
.... so what are the rest? Just patches in physics covering our ignorance, our
inability to accept that point particles, with the fictitious Kg dimension, do
not exist.
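The temperature-as-energy argument above is easy to make concrete. The short sketch below is my own illustration, using the CODATA-style values for the Boltzmann constant and the electronvolt, and reproduces the text's figure that roughly 11605 K corresponds to 1 eV.

```python
# Sketch of the claim that temperature is just a rescaled energy:
# Boltzmann's constant k acts purely as a conversion factor between
# kelvin and joules (or electronvolts).
K_BOLTZMANN = 1.380649e-23   # J/K
EV = 1.602176634e-19         # J per electronvolt

def kelvin_to_joules(t_kelvin):
    """Characteristic energy kT at temperature T."""
    return K_BOLTZMANN * t_kelvin

def kelvin_to_ev(t_kelvin):
    return kelvin_to_joules(t_kelvin) / EV

# The text's example: roughly 11605 K corresponds to 1 eV.
print(round(kelvin_to_ev(11605), 3))  # -> 1.0
```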
Present maintenance and transitions in the metric SI system of units
Yes, hard to believe
but true! Even though such transitions are hard to implement and the inertia of
the SI system of units is huge, a few transitions towards better definitions
are successfully finding their way into the present SI metric system, so all is
not lost. One such idea is the transition towards definitions based solely on
the unit of time, taking the atomic clock second as reference and adopting exact
values of certain constants. A notable step was taken in 1983 when the meter
was defined by specifying that the standard speed of light be exactly 299792458
meters per second. In 1990 the BIPM established its voltage standard by
specifying that Josephson's constant be exactly 483597.9 billion cycles per
second per volt. Although this standard is already in use, the official
definition for voltage has not yet been changed to be consistent with the
method of measurement, leaving the voltage and related quantities in a state of
patchwork. In 1999 the CGPM called for a redefinition of the kilogram along the
lines of the 1990 standards, and the following year two leading members, Mohr
and Taylor, supplied the following proposed redefinition: The kilogram is
the mass of a body at rest whose equivalent energy equals the energy of a
collection of photons whose frequencies sum to 135 639 274×10^42 Hz. Mohr
and Taylor also suggested that the larger (unreduced) Planck constant h be made
exactly equal to 299 792 458^2/(135 639 274) × 10^-42 joule second. This value
follows from their suggested definition of the kilogram.
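The quoted numbers can be checked in one line: for a 1 kg body, E = mc² must equal h times the sum of the photon frequencies, so h = c²/ν_total. The sketch below is my own check of that arithmetic.

```python
# Quick check of the proposed kilogram redefinition quoted above:
# for m = 1 kg, E = m c^2 = h * (sum of photon frequencies),
# hence h = c^2 / nu_total.
C = 299_792_458.0            # m/s, exact by definition since 1983
NU_TOTAL = 135_639_274e42    # Hz, the frequency sum quoted in the text

h = C ** 2 / NU_TOTAL
print(round(h * 1e34, 4))    # -> 6.6261, i.e. h is about 6.6261e-34 J s
```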
Reference: Redefinition of the kilogram: a
decision whose time has come.
Introducing the ST system of units
Here we will go a step
further than the conventional SI dimensions and their patchwork, and will
further reduce all scientific units into the real fundamental dimensions,
namely Space
(metres) and Time (seconds). As shown in this diagram, all SI units have been
re-mapped onto the two fundamental units. We can therefore re-map the rest of
the SI-derived units onto our ST system as well. At first it seemed an
impossible mission, but as we went through all equations currently known, we
found out that we've got a lot of different branches of science that are
equivalent to each other. We start off with the dimensions of distance as the
one dimensional unit of Space S, area becomes the 2 dimensional unit of space S2,
volume becomes the 3 dimensional unit of space S3, speed is
distance/time which becomes S/T. To move onward to define energy related units,
we used our knowledge of the standing wave EM structure of matter, which
enabled us to continue the conversion work on parameters in all the other
fields. Surprisingly, I have been able to put together a full self-coherent
table of ST dimension conversions for all known physical quantities in less
than a day, while eliminating all the nonsense webbing of the conventional SI
system.
Such a table sets up a much stronger foundation for a new science, and helps
you visualise how scientific parameters relate to each other through space and
time. Quoting John Wheeler, "There is nothing in the world except empty curved
space" and "Matter, charge, electromagnetism and other fields are
only manifestations of the curvature of space." Once you grasp the whole
concept, you will easily understand why RC is a time constant, why mass is a
volume of energy, why f = 1/(2π√(LC)), and how all 'mechanical' and
'electrical' parameters relate to each other.
Parameter | Units | SI units | ST Dimensions
Distance S | metres | m | S
Area A | metres square | m2 | S2
Volume V | metres cubed | m3 | S3
Time t | seconds | s | T
Speed/Velocity u | metres/sec | m/s | S T-1
Acceleration a | metres/sec2 | m/s2 | S T-2
Force/Drag F | | Kg m/s2 | T S-2
Surface Tension g | | Kg/s2 | T S-3
Energy/Work E | Joules | Kg m2/s2 | T S-1
Power P | | m2 Kg/s3 | S-1
Density r | kg/m3 | kg/m3 | T3 S-6
Mass m | Kilogram | Kg | T3 S-3
Momentum p | Kg metres/sec | Kg m/s | T2 S-2
Impulse J | | Kg m/s | T2 S-2
Moment m | | m2 Kg/sec2 | T S-1
Torque t | Foot Pounds or Nm | m2 Kg/sec2 | T S-1
Angular Momentum L | Kg m2/s | Kg m2/s | T2 S-1
Inertia I | Kilogram m2 | Kg m2 | T3 S-1
Angular velocity/frequency w | Radians/sec | rad/sec | T-1
Pressure/Stress P | Pascal or N/m2 | Kg/m/sec2 | T S-4
Specific heat Capacity c | J/Kg/K | m2/sec2/K | S3 T-3
Specific Entropy | J/Kg/K | m2/sec2/K | S3 T-3
Resistance R | Ohms | m2 Kg/sec3/Amp2 | T2 S-3
Impedance Z | Ohms | m2 Kg/sec3/Amp2 | T2 S-3
Conductance S | Siemens or Amp/Volts | sec3 Amp2/Kg/m2 | S3 T-2
Capacitance C | Farads | sec4 Amp2/Kg/m2 | S3 T-1
Inductance L | Henry | m2 Kg/sec2/Amp2 | T3 S-3
Current I | Amps | Amp | S T-1
Electric charge/flux q | Coulomb | Amp sec | S
Magnetic charge/flux f | Weber or Volt sec | m2 Kg/sec2/Amp | T2 S-2
Magnetic flux density B | Tesla/gauss or Wb/m2 | Kg/sec2/Amp | T2 S-4
Magnetic reluctance R | R | Amp2 sec2/Kg/m2 | S3 T-3
Electric flux density | Jm2 | Kg m4/sec2 | S T
Electric field strength E | N/C or V/m | m Kg/sec3/Amp | T S-3
Magnetic field strength H | Oersted or Amp-turn/m | Amp/m | T-1
Poynting vector S | Joule/s/m2 | Kg/sec3 | S-3
Frequency f | Hertz | sec-1 | T-1
Wavelength l | metres | m | S
Voltage EMF V | Volts | m2 Kg/sec3/Amp | T S-2
Magnetic/Vector potential MMF | MMF | Kg/sec/Amp | T2 S-3
Permittivity e | Farad per metre | sec4 Amp2/Kg/m3 | S2 T-1
Permeability m | Henry per metre | Kg m/sec2/Amp2 | T3 S-4
Resistivity r | Ohm metres | m3 Kg/sec3/Amp2 | T2 S-2
Temperature T | Kelvin | K | T S-1
Enthalpy H | Joules | Kg m2/s2 | T S-1
Conductivity s | Siemens per metre | sec3 Amp2/Kg/m3 | S2 T-2
Thermal Conductivity | W/m/K | Kg m/sec3/K | S-2
Energy density | J/m3 | Kg/m/sec2 | T S-4
Ion mobility m | metre2/Volt sec | Amp sec2/Kg | S4 T-2
Radioactive dose Sv | Sievert or J/Kg | m2/s2 | S2 T-2
Dynamic Viscosity | Pa sec or Poise | Kg/m/s | T2 S-4
Fluidity | 1/Pascal second | m sec/Kg | S4 T-2
Effective radiated power ERP | Watts/m2 | Kg/m/sec3 | S-3
Luminance | Nit | Candela/m2 | S-3
Radiant Flux | | Kg m/sec3 | S-1
Luminous Intensity | Candela | Candela | S-1
Gravitational Constant G | Nm2/Kg2 | m3/Kg/s2 | S6 T-5
Planck Constant h | Joule seconds | Kg m2/sec | T2 S-1
Coefficient of viscosity h | n | Kg/m/s | T2 S-4
Young's Modulus of elasticity E | N/m2 | Kg/m/s2 | T S-4
Electron Volt eV | 1eV | Kg m2/sec2 | T S-1
Hubble constant Ho | H | Km/sec/Parsec | T-1
Stefan's Constant s | W/m2/K4 | Kg/s3/m/K4 | S T-4
Strain e | | | S0 T0
Refractive index h | | | S0 T0
Angular position rad | Radians | m/m | S0 T0
Boltzmann constant k | Erg or Joule/Kelvin | Kg m2/s2/K | S0 T0
Molar gas constant R | J/mol/Kelvin | Kg m2/s2/K | S0 T0
Mole n | Mol | Kg/Kg | S0 T0
Fine Structure constant a | | | S0 T0
Entropy S | Joule/Kelvin | Kg m2/s2/K | S0 T0
Reynolds Number Re | | | S0 T0
If anyone wants to add any missing
parameter or knows any known equation that invalidates any of the above
conversions please let me know. Here is a simple example showing you how to
validate any equation into ST dimensions:
Equation to test: Casimir force F = hcA/d^4
Convert each parameter to its ST dimensions from the table:
F= force= T S-2
c= speed of light= S T-1
h= Planck's constant = T2 S-1
A= Area = S2
d= 1d space = S
So the equation becomes:
T S-2 = T2 S-1 * S T-1 * S2 * S-4 = T(2-1) S(-1+1+2-4)
T S-2 = T S-2 ... dimensionally correct.
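This bookkeeping can be automated. The sketch below is my own illustration: each quantity is stored as a pair of exponents (s, t) meaning S^s·T^t, copied from the conversion table, and multiplying quantities simply adds exponents. It replays the Casimir-force test above.

```python
# A small checker for the ST-dimension bookkeeping in the worked example.
ST = {                      # (S exponent, T exponent)
    "force": (-2, 1),       # T S^-2
    "c":     (1, -1),       # S T^-1
    "h":     (-1, 2),       # T^2 S^-1
    "area":  (2, 0),        # S^2
    "d":     (1, 0),        # S
}

def mul(*qs):
    """Multiply quantities by adding their ST exponents."""
    return (sum(q[0] for q in qs), sum(q[1] for q in qs))

def inv(q):
    """Reciprocal of a quantity: negate its exponents."""
    return (-q[0], -q[1])

# Casimir force: F = h c A / d^4
d4 = mul(*[ST["d"]] * 4)                          # d^4 -> S^4
rhs = mul(ST["h"], ST["c"], ST["area"], inv(d4))  # h c A / d^4
print(rhs == ST["force"])   # True -> dimensionally consistent
```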
Where does our scientific
knowledge stand ?
The above conversion table makes a few things quite obvious. Since S has been
defined as space in one dimension (line), S2 defines a 2D plane, S3
defines a 3D volume, and so forth, we might wonder why terms to the 6th power
should exist, and what is the significance of the negative powered dimensions.
As discussed in the section Higher dimensional space, all clues point
towards an ultimate fractal spacetime dimension slightly higher than 7. So, one
should really expect physical parameters with space dimensions up to 7. Now, to
the difficult part... time dimensions. We are normally used to talking about 3D
space + 1D time; I have also introduced n*D space + 1D time in the 'Existence
of higher dimensions' section. Our mind is limited to perceiving everything in
one 'time vector', that is one continuous time line arrow, having direction from
past to future. Most of the readers that went through the mentioned sections
would have probably already had a hard time trying to perceive higher space
dimensions as seen from such a single time vector. However, physical parameters
which affect our universe do not necessarily exist in this single timeline,
and this can be easily seen from those parameters having powers on their T
dimension different than unity. The table below, shows more clearly, all known
physical parameters in terms of their space & time dimensions. As expected,
all known parameters fit into a 7D Spacetime. As you know, I have often
referred to a self observing universe, and the negative powered dimensions are
a consequence of this observation. If the observer 'lives' in a 1D timeline,
then he can observe the surrounding space with respect to T, so we are able to
observe space S but also to observe S with respect to T = dS/dT = velocity = S
T-1. So, you see that although we write T-1, we are here
differentiating S by T+1, so the + and - only indicate in which
dimension (S or T) the observer is residing. Since the observer is always
differentiating a non-negative dimension, one of the ST dimensions should
always be raised to a power greater than or equal to zero. We now use this
knowledge to conclude that all physical parameters are combinations of
observing space time from different dimensions of space and time, and can all
fit into a table holding 7D spacetime. The table below shows the result after
crossing each box of each known physical parameter from the table above. No
parameters will fit in the shaded block as this area denotes observation of
negative dimensions which cannot exist. This table actually shows us where our
scientific knowledge stands: counting the checked boxes one gets only 17% of
the whole table, which took humanity a few million years to find out. Once
we are able to fill up the complete table, we will know how to
inter-relate all dimensions of our unified universe. Up to that day, no student
(or lecturer) can ever think that the existing explanations of science are
final and that after reading all his textbooks or finishing his course of
study, he should go away satisfied with his wisdom!
[Table: a 15×15 grid of space dimensions S7 down to S-7 (rows) against time dimensions T-7 to T7 (columns). Each known physical parameter from the conversion table above is marked with an X in its (S, T) box; the shaded region denotes observation of negative dimensions and remains empty. Only about 17% of the boxes are checked.]
Science tail chasing
... the mechanism that guarantees getting to nowhere
The space-time conversion
table shown in the previous page, is a great leap towards unification, and
makes obvious the redundancy of the conventional scientific laws just by a
general approach to its foundations - its measuring system. If the measuring
system of a science is full of redundant units, then it surely means that many
of the laws based on those units are redundant or circular.
The notion of redundancy of the scientific laws has been well expressed by the
late Professor JL Synge and made public in the series of lectures at the Dublin
Institute of Advanced Studies delivered in 1949. I would especially like to
thank Frank Grimer of the Vortex-L discussion group, for sharing with me his
own work and bringing to my attention the mathematical work of Jeans J, from
'An introduction to the Kinetic theory of gases', Cambridge Univ press 1960.
Quoting Synge in the following passage:
..... Thought is difficult and painful. The difficulties and pain
are due to confusion. From time to time, with enormous intellectual effort,
someone creates a little order - a small spot of light in the dark sea of
confusion. At first we are all dazzled by the light because we are used to
living in the darkness. But when we regain our senses and examine the light we
find it comes from a farthing candle - the candle of common sense. To change
the metaphor, the sages chase their own tails through the ages. A little child
says 'Gentlemen, you are chasing your own tails.' The sages gradually lose
their angular momentum, and, glancing over their shoulders, see what they are
pursuing. But most of them cannot believe what they see, and the tail chasing
does not die out until a generation has passed.....
Forty years ago Schroedinger wrote (in his article recently
reprinted in the Special Issue 1991 of Scientific American, "Science in
the 20th century", p.16):
"Fifty years ago science seemed on the road to a clearcut
answer to the ancient question which is the title of this article [Our
Conception of Matter]. It looked as if matter would be reduced at last to its
ultimate building blocks - to certain submicroscopic but nevertheless tangible
and measurable particles. But it proved to be less simple than that. Today a
physicist no longer can distinguish significantly between matter and something
else. We no longer contrast matter with forces or fields of force as different
entities; we know now that these concepts must be merged... . We have to admit
that our conception of material reality today is more wavering and uncertain
than it has been for a long time. ... Physics stands at a grave crisis of
ideas. In the face of this crisis, many maintain that no objective picture of
reality is possible. However, the optimists among us (of whom I consider myself
one) look upon this view as a philosophical extravagance born of despair. We
hope that the present fluctuations of thinking are only indications of an
upheaval of old beliefs which in the end will lead to something better than the
mess of formulas that today surrounds our subject."
It is astonishing, but also frustrating, to see how topical these remarks
still are today. Weinberg, Feynman, Wolff and certainly other well known science
explorers, have more than once drawn our attention to the same inadequate
foundations for natural laws.
In my ST table together with the description of the fractal model of the atom
described in the particle section, I tried to show the head and tail of
science. As you will have followed, the units candela, Kg, mole, Ampere and
Kelvin are the teeth holding tight the tail of science. Our present science
knowledge books and lectures are the force driving the circular motion of the
tail chasing. The conversion table stops this vicious loop in quite an abrupt
way and attempts to put back some order.
Of course, most of you will not like what you see, and will argue that the tail
you are chasing is not yours. But let's stop with metaphors, and try to explain
it
with some elementary physics.
What looks so unconventional in the ST unification table is the fact that
matter is a 3D version of energy, and that energy or 1D mass, is the inverse of
velocity. Once these two strange links are clarified, it becomes immediately
clear that the ST table should be the real fundamental measuring system of
science.
Let's start from what everybody should know: 1D space dimension S is a unit of
length, and 1D time dimension T is a unit of time. It also follows that the
unit of velocity should be S/T and that of acceleration is ST-2.
Also the second dimension of space is not 2S but S2. Now, anybody
who tried out known equations and worked out their dimensions according to the
ST table would agree that the rest of the table is, to say the least, SELF
COHERENT, but the link between length, time, velocity or acceleration to energy
and all the rest of the parameters may not be obvious. For this analysis I've
used the quite elementary yet powerful equations of motion given by Jeans J. in
his introduction to the Kinetic Theory of gases, and will try to derive the
mass unit in its one dimensional form, in terms of length and time.
We will here consider the impact of two elastic bodies masses m1, m2
in a simple 1 dimensional space. The velocities before impact are u1,
u2 respectively. The velocities after impact are v1, v2.
Since we will consider mass in one dimension (a point moving along a line), we
will assume movement is taking place only in the x-direction, to the left and
right. You can choose any x-direction of motion to be positive velocity and the
other will be the negative.
1D Hierarchical conservation of momentum
We'll here consider a totally isolated system, in which we know that total
system momentum is conserved. The momentum lost by one object is equal to the
momentum gained by another object. For collisions occurring in an isolated
system, there are no exceptions to this law.
momentum before impact
= momentum after impact
m1u1 + m2u2 = m1v1
+ m2v2 .....(1)
Looked at hierarchically, velocity may be viewed as existing at two levels, a high order velocity V averaged over equal intervals of time before and after impact and defined by the equation:
V = 1/2 (u1 + v1) = 1/2 (u2 + v2) .....(2)
and low order velocities obtained by subtracting the high order velocity, V, from the individual velocities, u1, u2, v1, v2:
μ1 = u1 - V .....(3)
μ2 = u2 - V .....(4)
υ1 = v1 - V .....(5)
υ2 = v2 - V .....(6)
From equation (2):
μ1 = -υ1 .....(7)
μ2 = -υ2 .....(8)
The individual velocities can now be seen as the sum of the low order, 'within batch' velocities μ1, μ2, υ1, υ2 and the higher order, 'between batch' velocity V. Now from equations (3) to (8):
u1/μ1 + u2/(-μ2) = v1/(-υ1) + v2/υ2 .....(9)
Substituting from equations (7) & (8) and re-arranging:
(1/μ1)u1 + (1/υ2)u2 = (1/μ1)v1 + (1/υ2)v2 .....(10)
Equation (10) is isomorphic to the equation of conservation of momentum,
equation (1):
m1u1 + m2u2 = m1v1 + m2v2 .....(1)
The 1D masses m1 and m2 have been replaced by the reciprocal internal 1D
velocities (1/μ1) and (1/υ2).
Numerically, these reciprocal terms will differ from the mass values in Kg
units, for the reason that the kg SI unit is an arbitrary unit defined in 3D,
whereas the reciprocal terms are in seconds per metre units. This implies that
the 1D form of mass has dimensions (S/T)-1 or T/S. The concept of 3D
mass can thus be replaced by the concept of reciprocal 3D internal velocity
both at the macro and the micro scale, leading to a 3D mass dimension of T3/S3.
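The claims in this derivation can be checked numerically. The sketch below is my own illustration with arbitrary test masses and velocities: it solves a 1D elastic collision from conservation of momentum and kinetic energy, then verifies that the 'high order' velocity computed from either body agrees (equation (2)) and that the low order velocities reverse sign across the impact.

```python
# Numeric sketch of the hierarchical-momentum argument above.
def elastic_collision_1d(m1, u1, m2, u2):
    """Post-collision velocities for a 1D elastic collision."""
    v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
    v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)
    return v1, v2

m1, u1, m2, u2 = 2.0, 3.0, 5.0, -1.0      # arbitrary test values
v1, v2 = elastic_collision_1d(m1, u1, m2, u2)

V1 = (u1 + v1) / 2            # 'high order' velocity from body 1 ...
V2 = (u2 + v2) / 2            # ... and from body 2: the two must agree
mu1, mu2 = u1 - V1, u2 - V1   # low order velocities before impact
nu1, nu2 = v1 - V1, v2 - V1   # low order velocities after impact

print(abs(V1 - V2) < 1e-12)    # True: equation (2) holds
print(abs(mu1 + nu1) < 1e-12)  # True: the low order velocities reverse
print(abs(mu2 + nu2) < 1e-12)  # True
```

One can also check that the ratio (1/mu1) : (1/nu2) equals m1 : m2, which is exactly the 'reciprocal internal velocity' reading of mass.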
The concept of stepping up dimensions can be easily understood when one
considers any spacetime unit to be a ratio of two spatial dimensions. We can
easily understand that 2D space is S2, 3D space is S3.
This rule applies to the spatial time dimension as well as to combination units
such as velocity and mass. For example, the nth dimensional unit of a spacetime
parameter Sx Ty will be equal to Snx Tny.
Thus units for different dimensions of mass will be of the form Tn S-n,
all being the same entity in different dimensions. The Newtonian Kg is just one
of these entities for the condition n=3, giving the 3D version of mass,
spacetime dimension T3/S3.
From the kinetic energy equation E=1/2mv2, we get E= T3S-3*S2T-2=
T/S, re-confirming Einstein's statement : 'It followed from the special
theory of relativity that mass and energy are both but different manifestations
of the same thing -- a somewhat unfamiliar conception for the average mind'. One
could easily replace the Kg by Joules3 by simply introducing a
dimensionless conversion factor between the two units. It is quite impressive
that we arrived at the same conclusion without reference to Einstein's
equations or special theory of relativity. All we did was in fact equate
velocities in the elementary equation of conservation of momentum.
Other ST units can be easily derived as follows:
Planck constant h= E/f = T/S * T = T2/S
From E=mc2, m = E/c2 = T/S * (T/S)2 = T3S-3
For momentum = mv = T3/S3*S/T = T2S-2
For angular momentum L = mvr = T3/S3*S/T*S = T2/S
... same as Planck constant
For Moment of Inertia I=L/w = T2/S * T = T3/S
From F=ma, we get Force= T3S-3*S T-2= TS-2
Electromotive force (Voltage) = TS-2
For power, P=Fv, we get P=TS-2*S/T = S-1
For current, I= P/V = S-1/(TS-2) = S/T
For resistance, R = V/I = TS-2/ (S/T) = T2S-3
For mass flow rate mdot = dm/dt = (T3S-3)/ T = T2S-3
For Pressure = F/A = TS-2*S-2 = TS-4
Frequency = v/λ = S/T*S-1 = T-1
Temperature = E/k = T/S * 1 = T/S
For charge q=It = S/T * T = S
For Capacitance C = Q/V = S/(TS-2) = S3T-1
From V=L(dI/dt), Inductance L= TS-2 * T * T/S = T3/S3...same
as mass!
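The whole chain of derivations above can be replayed mechanically with the same exponent-pair arithmetic used earlier: each quantity is (s, t) meaning S^s·T^t, multiplying adds exponents and dividing subtracts them. The sketch below is my own illustration, taking only mass = T^3 S^-3 as a starting point from the text.

```python
# Replaying a few of the ST derivations with exponent arithmetic.
def mul(a, b): return (a[0] + b[0], a[1] + b[1])
def div(a, b): return (a[0] - b[0], a[1] - b[1])

S, T = (1, 0), (0, 1)
velocity = div(S, T)                    # S T^-1
mass     = (-3, 3)                      # T^3 S^-3, as derived in the text
force    = mul(mass, div(velocity, T))  # F = ma   -> T S^-2
power    = mul(force, velocity)         # P = Fv   -> S^-1
voltage  = force                        # EMF has force dimensions
current  = div(power, voltage)          # I = P/V  -> S T^-1
charge   = mul(current, T)              # q = It   -> S
resistance  = div(voltage, current)     # R = V/I
capacitance = div(charge, voltage)      # C = q/V
inductance  = div(voltage, div(current, T))  # V = L(dI/dt)

print(resistance == (-3, 2))    # True: T^2 S^-3, as listed above
print(capacitance == (3, -1))   # True: S^3 T^-1
print(inductance == mass)       # True: inductance has the dimensions of mass
```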
Interesting things to note:
Comparing with V = L(dI/dt), where voltage has the same dimensions as force,
inductance the same dimensions as mass, and current the same dimensions as
velocity.
It is clear that the equations are actually the same, and that V=L(dI/dt) is
actually
Power = Force * Velocity
Comparing with Power = V*I, where voltage has the same dimensions as force, and
current has dimensions of velocity. Again it's the same equation.
Kinetic energy = 1/2mv2
Energy stored in inductor = 1/2LI2, where L has dimensions of mass
and I of velocity.
Work (Energy) = Force * distance
Compare with Energy = Vq, where voltage has dimensions of force, and charge
dimensions of length.
Now compare the time constant of a simple
pendulum given by (L/g)^1/2.
If we replace pendulum length L by charge (dimension S), and gravitational
acceleration by current acceleration (a = dv/dt → dI/dt), we have:
Time constant = (qT/I)^1/2 ... but q=CV and R=V/I so:
T = (RCT)^1/2
(T)^1/2 = (RC)^1/2
T = RC ... time constant for RC circuit... derived from the mechanical pendulum
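The dimensional side of this analogy can be checked with the same exponent pairs, extended with fractional powers for the square root. This sketch is my own illustration of the substitution described above.

```python
# Dimensional sketch of the pendulum -> RC analogy: replacing L by charge q
# (dimension S) and g by dI/dt (S T^-2), the time constant (q/(dI/dt))^1/2
# comes out as pure T.  Pairs (s, t) mean S^s * T^t.
from fractions import Fraction

def div(a, b):
    return (a[0] - b[0], a[1] - b[1])

def root(q):
    """Square root of a quantity: halve its exponents."""
    return (Fraction(q[0], 2), Fraction(q[1], 2))

charge = (1, 0)        # q -> S
didt   = (1, -2)       # dI/dt -> S T^-2 (current S T^-1, per unit time)
tau = root(div(charge, didt))
print(tau == (0, 1))   # True: pure T, i.e. a time constant
```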
From Force = Rate of change of momentum = m
(dv/dt)
Compare to EMF = L (dI/dt)... means that product LI is in fact the momentum of
the electrical system.
Energy = Force * distance
Energy = ma * d
E = mvd/t .... but mvd is momentum*distance which has the same dimensions as
Energy*time same as the well known quantum of action : Planck constant h, so:
E = h/t .... 1/t= frequency, thus
E = hf
From
rocket equation : Thrust = Velocity * Mass flow rate
Replacing Thrust (force) by voltage, velocity by current, and mass flow rate by
resistance, we get Ohm's law:
V = IR .... so, Ohm's law is nothing more than the rocket thrust equation and
shows that a resistor controls MASS flow rate NOT charge flow rate. This
clearly shows one of the major misconceptions of the present electrical theory,
in which it is assumed that a resistor has an effect on charges, which is
clearly not the case. Resistance is in fact acting on the MASS of the flowing
electrons and not on their charge.
A note on h and h-bar
Arguments showing why h-bar
(Dirac's constant )
should NOT be used to derive Planck units
Unfortunately, a lot of scientific literature states Planck units expressed in
terms of ħ (= h/(2π)),
known as Dirac's constant, or the reduced Planck constant. THIS IS INCORRECT.
The 2π factor in fact leads to totally different (and wrong)
numeric values for Planck units than the original values set out by Planck
himself. The 2π factor is a gratuitous addition, coming from
the failure to address the Hydrogen atom's stable orbits as defined by the
orbital path length being an exact multiple of the orbital matter (standing
wave) wavelength.
The statement that the
orbital electron's angular momentum is quantised as in:
m.v.R = n.(h/2π) = n.ħ
for integer values of n, is just a mis-statement of
2π.R = n.h/(mv) .... which, when substituting for h=E/f, v=f.λ, and
m=E/(f.λ)2... we get:
2π.R = n.λ ..... which means that the 2π
factor has nothing to do with h as such, and that the orbital path is just an
integer number of wavelengths as described by Louis De Broglie! (see diagram
above). Dirac's ħ was
thus defined due to a lack of understanding of the wave structure
of matter, and its use should be discouraged.
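The algebra of that substitution is easy to verify numerically: with h = E/f, v = f·λ and m = E/(f·λ)², the quantity h/(mv) collapses to λ. The sketch below is my own check, with E, f and lam as arbitrary positive test values.

```python
# Check that h/(m*v) = lambda under the substitutions used in the text,
# so m.v.R = n.h/(2*pi) is the same statement as 2*pi*R = n*lambda.
E, f, lam = 7.3, 2.1, 0.64          # arbitrary energy, frequency, wavelength

h = E / f                           # h = E/f
v = f * lam                         # v = f * lambda
m = E / (f * lam) ** 2              # m = E/(f*lambda)^2

print(abs(h / (m * v) - lam) < 1e-12)   # True: h/(mv) reduces to lambda
```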
Some physicists still prefer
to use h-bar, not for any scientific reason, but mostly for the sake of
simplicity in their calculations. Their main point of view about the argument
is that preferring h to h-bar amounts to preferring a circle whose
circumference is 1 to a circle whose radius is 1, and that setting h equal to 1
instead of hbar = 1 amounts to working with a circle of unit circumference
instead of unit radius. Though this may look simple and true when one views the
problem in Euclidean (plane) geometry, one has to keep in mind that
Euclidean geometry is only an approximation to the properties of
physical space, and Einstein showed that space gets elliptically curved
(non-Euclidean) in the regions where matter is present. The shortest path in a
non-Euclidean space is a curved path, and though it does not seem logical, the
straight line joining two points may be a longer way to go than the curved path
between the same two points. The matter wave (De Broglie wave) shown above is
not being forced to loop round the circle, it is just following the easiest and
shortest path in its non-euclidean space. Planck's work was not about
electromagnetic waves travelling in free space, in which Euclidean geometry is
a good approximation, but on the interaction of such waves with
matter. Matter plays an important role in all of Planck's work, and thus a non-Euclidean space is to be preferred for all Planck units; a circumference value must therefore be used in favour of a radius value as the shortest length, whether or not normalised to unity.
For this reason, in all my work, I've chosen to use the original Planck units
which are expressed in terms of h, Planck constant. The following derived
values in fact are in perfect agreement with Planck's original values. Using
the original Planck values for S (Lp) and T (tp), and
simply plugging their value in the ST system of units, based on h, one can in
fact DERIVE the numeric values for constants such as free space impedance, Von
Klitzing constant, Quantum conductance, Josephson constant and more (see next
page). If one tries to do the same thing using the numerical values for
Planck's length and time based on h-bar, all derived values for the mentioned
constants will be wrong! For these reasons, I can say with absolute certainty that the Planck values based on Dirac's h-bar are wrong, and any scientific literature showing otherwise would do better to revert to the h-based units, or at least make its readers aware of the arguments above.
The Spacetime freespace constants & Fine structure constant
In the ST system of units list we can clearly see that ALL physics constants and parameters have spacetime in common. Space and time are inter-related, in that dimension S can be differentiated (observed) by dimension T, and vice versa, depending on which dimension is taken as reference by the observer. S and T can however be deduced separately, in conditions where spacetime is continuous, that is everywhere, as far as we know. The whole universe can be explained in terms of these two interacting dimensions S and T, which have unique values. Note that in this unification theory, unlike what we perceive as human beings, both space and time have the same number of dimensions, and are both SPATIAL. In such a theory, a volume of time T³ with respect to S, for an observer in the spatial dimension S, has the same properties as a volume of space S³ with respect to T, for an observer in the spatial dimension T. This may sound strange to most of us, because we are used to viewing the universe with respect to time, and perceive the spatial dimension T only as our temporal dimension. If you cannot grasp this concept, do not worry, as you should still be able to understand the main issues. The condition for the universe to exist is that we have TWO such spatial dimensions interacting together. As we say, 'it takes two to tango'.
Natural Units (also known as Planck or God's units)
So, as we have shown in the conversion table, both Mass & Current can be reduced
to spacetime equivalents, with no requirement for any hard particle unit such as the kg. However, one cannot expect to put natural values for S and T in the ST
equivalent of mass and get a result in kg. The kg unit is not a natural unit,
but a fictitious man made unit. It is in fact the last SI base unit to be still
based on a prototype. In 1889, the 1st CGPM sanctioned the international
prototype of the kilogram, made of platinum-iridium, and declared: This
prototype shall henceforth be considered to be the unit of mass. The picture at
the right shows the platinum-iridium international prototype, as kept at the
International Bureau of Weights and Measures under conditions specified by the
1st CGPM in 1889. This is a worrying fact for NIST; resolution 7 of the 21st General Conference on Weights and Measures had in fact called for a redefinition of the kilogram, offering to redefine the kg as 'the mass of a body at rest whose equivalent energy equals the energy of a collection of photons whose frequencies sum to 135639274E42 Hz'. Such a redefinition has not yet taken place. In fact, all physical units such as the candela, the joule, heat capacity, etc. were set up to different standards for historical reasons.
During his lifetime, Planck had derived a set of standard units. As opposed to
the SI standard, these units are based on the natural constants : G
(gravitational constant), h Planck constant, c Speed of light, k
Boltzmann constant and permittivity. They are based on universal constants and
thus known as Planck's natural units. The two basic Planck units can be easily
derived from my ST conversion table as follows:
h = [k]·T²/S
c = S/T
G = [1/k]·S⁶/T⁵
k = kg conversion factor (read following paragraph)
So h = [k]·T/c and G = T·c⁶/[k]
G·h = T²·c⁵
T = (G·h/c⁵)^(1/2)
Substituting S = T·c, we get:
S = (G·h/c³)^(1/2)
Natural length (S) = (G·h/c³)^(1/2) = 4.051319933E-35 m
Natural time (T) = (G·h/c⁵)^(1/2) = 1.351374868E-43 s
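These two values can be checked numerically. A minimal sketch in Python, assuming standard CODATA-style inputs for G, h and c (the text's own G differs slightly in the last digits):

```python
import math

# Assumed CODATA-style inputs, not the text's exact figures
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
h = 6.62607015e-34    # Planck constant (h, not h-bar, as the text prescribes)
c = 2.99792458e8      # speed of light, m/s

S = math.sqrt(G * h / c**3)   # natural length, metres
T = math.sqrt(G * h / c**5)   # natural time, seconds
print(S, T)                    # ~4.05e-35 m and ~1.35e-43 s
```

Note that S/T = c by construction, which is the "Substituting S = T·c" step above.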
Knowing the natural values for S and T, we can now easily define a conversion ratio between the ST units and the man-made unit we call the kg. This constant works out to be equal to (h·c⁷/G)^(1/2), or kQ = 1.469944166E18, and is dimensionless. So:
Mass (kg) = 1.469944166E18·(T³/S³) = kQ·(T³/S³)
This factor has therefore to be applied to all those units quoting the kg SI unit. For example, for force (newtons) we know that the SI units are kg·m/s², so to convert the ST values into newtons we apply the same conversion equation that we use for the kg.
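The kg conversion factor kQ, and the Planck mass it implies, can be verified the same way (assumed CODATA-style inputs again, so the last digits differ slightly from the text's):

```python
import math

# Assumed CODATA-style inputs, a sketch rather than the text's exact values
G = 6.674e-11
h = 6.62607015e-34
c = 2.99792458e8

k_Q = math.sqrt(h * c**7 / G)   # dimensionless kg conversion factor
mass = k_Q / c**3               # Planck mass: k_Q*(T/S)^3, with S/T = c
print(k_Q, mass)                 # ~1.47e18 and ~5.46e-8 kg
```

Algebraically, k_Q/c³ reduces to √(hc/G), the conventional h-based Planck mass.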
The above conversion constant will also be applied to energy. Although the kelvin unit in ST has the same dimensions as energy, the conversion constant for kelvin is not the same. We know that 11604.499 kelvin is equivalent to 1 eV, which is equal to 1.602E-19 joules. One kelvin is equal to 1.3806E-23 joules, where 1.3806E-23 is Boltzmann's constant k. It follows that the conversion ratio from spacetime parameters to kelvin units is given by:
Kelvin (K) = [kQ/k]·(T/S) .... k = Boltzmann constant
This factor has therefore to be applied to all those units quoting the kelvin SI unit. For example, for thermal conductivity we know that the SI units are kg·m/s³/K, so to convert the ST values into SI units we apply the factor kQ for the kg unit and the factor [kQ/k]⁻¹ for the K⁻¹ unit.
The ampere is the next redundant unit introduced in the SI due to lack of knowledge of the EM nature of matter. This unit is defined as that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross section, and placed 1 metre apart in vacuum, would produce between these conductors a force equal to 2 × 10⁻⁷ newton per metre of length.
Now Natural Current = electron charge per unit time = q/(h·G/c⁵)^(1/2) = j·(S/T)
Dimensionless conversion factor j = 3.954702562E15. So:
Current (Amps) = 3.954702562E15·(S/T) = j·(S/T)
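The current factor j can be checked similarly; here e is the elementary charge, an assumed reference value, and T is the h-based natural time derived above:

```python
import math

# Assumed reference values, a sketch rather than the text's exact figures
e = 1.60217663e-19    # elementary charge, C
G = 6.674e-11
h = 6.62607015e-34
c = 2.99792458e8

T = math.sqrt(G * h / c**5)   # natural time
j = e / (T * c)               # natural current e/T equals j*(S/T) = j*c, so j = e/S
print(j, j * c)               # ~3.95e15, and the Planck current ~1.19e24 A
```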
Derived Planck Units
Using the above calculated unit conversion factors, and the ST conversion
table, we can derive many other natural units and constants.
For natural length we have: Length = S = 4.05132E-35 m = Planck length, sometimes also (wrongly) quoted as S/√(2π) = 1.61624E-35 m.
For natural time we have: Time = T = 1.35137E-43 s = Planck time, sometimes also (wrongly) quoted as T/√(2π) = 5.391E-44 s.
For natural speed we have: Speed = S/T = 4.05132E-35/1.35137E-43 = 299.79E6 m/s = speed of light.
For the Planck constant, or natural angular momentum, we have: h = kQ·(T²/S)
h = 1.469944166E18 × (1.351374868E-43)²/4.051319933E-35 = 6.626E-34 kg·m²/s
For the gravitational constant G we have: G = (1/kQ)·(S⁶/T⁵), which works out to 6.672E-11 m³/s²/kg. This time we used 1/kQ since we have kg⁻¹ in the SI units of G.
Now from the units of energy, kg·m²/s², we know that the same constant kQ has to be applied to energy equations. So for energy we have:
E = kQ·(T/S) = 1.469944166E18/299.792E6 = 4.9032E9 joules = Planck energy.
For natural mass we have: Mass = kQ·(T³/S³), which works out to 5.456E-8 kg = Planck mass, sometimes also quoted as M/√(2π) = 2.17645E-8 kg.
For natural power we have: Power = kQ·(1/S) = 1.469944166E18/4.051319933E-35 = 3.6283E52 watts = Planck power.
For natural charge we have: Charge = j·S = 3.954702562E15 × 4.051319933E-35 = 1.602E-19 C = electron charge.
For natural current we have: Current = j·(S/T) = 3.954702562E15 × c = 1.18559E24 amps = Planck current.
For natural temperature we have: Temperature = [kQ/k]·(T/S) = (1.469944166E18/1.380662E-23)·(1/c) = 3.551344E32 kelvin.
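The chain of derived units above can be checked numerically. The sketch below (Python, with assumed CODATA-style inputs rather than the text's exact figures) recomputes the Planck energy, power and temperature from kQ, and recovers h itself as a closing consistency check:

```python
import math

# Assumed CODATA-style inputs; a sketch, not the text's exact values
G, h, c = 6.674e-11, 6.62607015e-34, 2.99792458e8
k_B = 1.380649e-23            # Boltzmann constant

S = math.sqrt(G * h / c**3)   # natural length
T = math.sqrt(G * h / c**5)   # natural time
k_Q = math.sqrt(h * c**7 / G) # kg conversion factor

energy = k_Q * T / S          # Planck energy, ~4.90e9 J
power  = k_Q / S              # Planck power, ~3.63e52 W
temp   = (k_Q / k_B) * T / S  # Planck temperature, ~3.55e32 K
h_back = k_Q * T**2 / S       # recovers h itself, closing the loop
print(energy, power, temp, h_back)
```

The last line is an exact algebraic identity: kQ·T²/S = √(hc⁷/G)·√(Gh)/c^(7/2) = h.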
A comprehensive list of
numeric values for all known physical units has been worked out on the next
page.
The FINE STRUCTURE CONSTANT ENIGMA
So far so good: all parameters get the exact known natural values by using the derived constants kQ (for the kg unit) and j (for the amp unit). Now for the tricky part: the free space constants. In the SI system of units we note a few units, like permittivity, permeability, impedance, conductance, etc., that for some weird reason have the kg as part of their unit. For example, permittivity is defined as A²·s⁴/kg/m³, and impedance as m²·kg/s³/A². Since during the development of the SI system nobody ever realised that the kg unit was actually representing a standing wave electromagnetic structure, we see that this unit has been applied also to units which, although they represent a volume of 3D energy (T³/S³), are NOT standing waves. The spacetime dimensions for a 3D outgoing or incoming travelling volume of energy are the same as those of a 3D standing wave, but the conversion constant for the kg in these two cases is different.
Let us take an example to make everything clear:
We know that Freespace Impedance = 376.73 Ohms ... Radio engineers know this
very well
Now the ST equivalent for impedance is T²·S⁻³, and its SI units are m²·kg/s³/A².
To calculate the natural impedance, we first put in the natural values for S and T, then multiply by the kg conversion factor kQ, and divide by the square of the amp conversion factor j.
Natural Impedance = 25812.807 Ohms, also known as Von Klitzing constant Rk.
In 1985, the German physicist Klaus von Klitzing was awarded the Nobel Prize in Physics for his discovery that, under appropriate conditions, the resistance offered by an electrical conductor is quantized; that is, it varies by discrete steps rather than smoothly and continuously.
And here we have got the interesting discrepancy between Natural &
Freespace impedance. This is no mathematical mistake, as we know that both the
freespace impedance and the natural impedance have been experimentally
confirmed under different conditions. This discrepancy comes from the fact that
natural values apply to a standing wave 3D energy structures, whilst freespace
impedance applies to travelling waves as we know.
Working out the ratio Z0/ZNAT = 376.7303/25812.807 = 1/68.518 = 2/137.036,
I have found that the ratio of these two impedances is given exactly by:
Free space Z0 = ZNAT × 2α
where α is the well-known fine structure constant, given by α = μ0·c·e²/(2h) = 1/137.036.
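This ratio is easy to verify numerically; the two impedance values below are standard reference figures (assumed here, not derived):

```python
# Check that the free-space and von Klitzing impedances differ by 2*alpha.
Z0 = 376.730313          # free space impedance, ohms (= mu0 * c)
Rk = 25812.80745         # von Klitzing constant, ohms (= h / e^2)
alpha = 1 / 137.035999   # fine structure constant

ratio = Z0 / Rk
print(ratio, 2 * alpha)  # the two numbers agree to within rounding
```

Algebraically this is just α = μ0·c·e²/(2h) rearranged: μ0·c = 2α·(h/e²).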
From this we deduce that although the SI system does not recognise two types of kg units (having the same dimensions T³·S⁻³), we have a relation between the kg used in 'matter' equations and the kg used in free space 'wave' equations:
kg_freespace/kg_matter = 2α
This means that for units defining a travelling EM volume of energy, the kg conversion constant kQ has to be multiplied by 2α. We will call this new product of constants kF, denoting it for free space EM waves. Thus, for all free space parameters, we have:
kg_freespace = 2.145340167E16·(T³/S³) = kF·(T³/S³) .... where kF = 2α·kQ
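The new constant can be computed directly (assumed CODATA-style inputs, so the last digits differ slightly from the quoted 2.145340167E16):

```python
import math

# kF = 2*alpha*kQ, the kg factor for free-space (travelling) waves
G, h, c = 6.674e-11, 6.62607015e-34, 2.99792458e8
alpha = 1 / 137.035999

k_Q = math.sqrt(h * c**7 / G)
k_F = 2 * alpha * k_Q
print(k_F)   # ~2.145e16
```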
This sheds light on the
actual significance of the fine structure constant. It is well known that
Alpha, the fine structure constant, which is a dimensionless number, is
difficult to fit into a rational scheme of physics. Max Born stated "There
seems to be little doubt that the existence of this dimensionless number, the
only one that can be formed from e, c and h, indicates a deeper relation
between electrodynamics and quantum theory than the current theories provide,
and the theoretical determination of its numerical value is a challenge to
physics." Richard Feynman (4) writes, "It has been a mystery
ever since it was discovered more than fifty years ago, and all good
theoretical physicists put this number up on their wall and worry about
it".
Now with the aid of the unified ST table, we have a further clue on what alpha
might represent. It measures the strength of the electromagnetic interaction
between incoming and outgoing spherical waves within the structured standing
spherical wave (or matter). It is a ratio of volume of energy between the
travelling spherical waves and the standing wave EM structure. It is worth
noting that the fine-structure 'constant' maintains its value as long as the entity of matter is at standstill. The effective electric charge of the
electron actually varies slightly with energy so the constant changes a bit
depending on the energy scale at which you perform your experiment. For
example, 1/137.036 is its value when you do an experiment at very low energies
(like Millikan's oil drop experiment) but for experiments at large
particle-accelerator energies (like 81GeV) its value grows to 1/128. This is
not the same as saying that Alpha is not constant. In fact, in April 2004, new and
more-detailed observations on quasars made using the UVES spectrograph on
Kueyen, one of the 8.2-m telescopes of ESO's Very Large Telescope array at
Paranal (Chile), puts limits to any change in Alpha at 0.6 parts per million
over the past ten thousand million years. So we might say that Alpha measured
at zero Kelvin is a constant of exceptional stability. The reason for its
change at high energy levels is that when the standing EM wave starts radiating
heat (EM waves), part of the electron's internal EM energy starts travelling
outwards, and the travelling wave conversion constant kF changes. If
the standing wave is somehow changed all into pure travelling waves, this
constant will increase to unity, and thus kF and kQ will
be equal, and so kgfreespace will be equal to kgmatter.
This is the main reason why forces seem to unify at high energy levels as shown
below:
The fine structure constant is one of the most wonderful physical constants, α = 1/137.036... The quantity α was introduced into physics by A. Sommerfeld in 1916 and in the past has often been referred to as the Sommerfeld fine-structure constant. It splits some spectral lines in the hydrogen atom such that ΔE = (α/4)²·Eᵢ.
In order to explain the observed splitting or fine structure of the energy
levels of the hydrogen atom, Sommerfeld extended the Bohr theory to include
elliptical orbits and the relativistic dependence of mass on velocity. The quantity α, which is equal to the ratio ve/c, where ve is the velocity of the electron in the first circular Bohr orbit and c is the speed of light in vacuum, appeared naturally in Sommerfeld's analysis and determined the size of the splitting, or fine-structure, of the hydrogenic spectral lines. α is simply the ratio of the electromagnetic wavelength of the electron's internal energy E = me·c² to the circumference of the first circular Bohr orbit. It is the ratio between the two fundamental velocities: c, the speed (S/T) of EM energy in free space, and α·c, the speed (S/T) in the quantum world. Feynman wrote:
There is a most profound and beautiful question associated with the observed
coupling constant, e the amplitude for a real electron to emit or absorb a real
photon. It is a simple number that has been experimentally determined to be
close to -0.08542455. (My physicist friends won't recognize this number,
because they like to remember it as the inverse of its square: about 137.03597,
with an uncertainty of about 2 in the last decimal place. It has been a
mystery ever since it was discovered more than fifty years ago, and all good
theoretical physicists put this number up on their wall and worry about it.)
Immediately you would like to know where this number for a coupling comes from:
is it related to pi or perhaps to the base of natural logarithms? Nobody knows.
It's one of the greatest damn mysteries of physics: a magic number that comes
to us with no understanding by man. You might say the "hand of God"
wrote that number, and "we don't know how He pushed his pencil."
Let's now consider:
Classical electron radius, re = 2.8179403E-15 m
Bohr radius, a0 = 5.29177208E-11 m
Rydberg constant, Ryd = 10973731.5685 m⁻¹
In order to see the relation between each of the above radii and wavelengths, we must express these values in a similar form, for example as wavelengths or orbit circumferences:
λclass = 2π·re = 1.77056410E-14 m
λCompton = 2.42631021E-12 m
λBohr = 2π·a0 = 3.32491846E-10 m
λRydberg/2 = 1/(2·Ryd) = 4.55633525275E-08 m
The numerical values for the wavelengths clearly show that:
λclass/λCompton = λCompton/λBohr = λBohr/λRydberg/2 = α
We can also work out the frequencies from f = c/λ:
fclass = c/(2π·re) = 1.693203E22 Hz
fCompton = c/2.42631021E-12 m = 1.23559E20 Hz
fBohr = c/(2π·a0) = c/3.32491846E-10 m = 9.016536E17 Hz
fRydberg/2 = 2·Ryd·c = c/4.55633525275E-08 m = 6.57968E15 Hz
So we have a similar relation for frequencies:
fRydberg/2/fBohr = fBohr/fCompton = fCompton/fclass = α
Knowing that energy E = h·f, we get the following energy values:
Eclass = h·c/(2π·re) = 1.121946E-11 J
ECompton = h·c/2.42631021E-12 m = 8.187236E-14 J
EBohr = h·c/(2π·a0) = 5.974515E-16 J
ERydberg/2 = h·c·(2·Ryd) = 4.359811E-18 J
ERydberg/2/EBohr = EBohr/ECompton = ECompton/Eclass = α
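The ladder of length scales can be verified numerically; the four reference values below are the ones quoted above:

```python
import math

# The alpha ladder: successive electron length scales differ by a factor alpha.
r_e   = 2.8179403e-15        # classical electron radius, m
lam_C = 2.42631021e-12       # Compton wavelength, m
a_0   = 5.29177208e-11       # Bohr radius, m
Ryd   = 10973731.5685        # Rydberg constant, 1/m

lam_class = 2 * math.pi * r_e   # "classical" circumference
lam_bohr  = 2 * math.pi * a_0   # first Bohr orbit circumference
lam_ryd2  = 1 / (2 * Ryd)       # half the Rydberg wavelength

ratios = (lam_class / lam_C, lam_C / lam_bohr, lam_bohr / lam_ryd2)
print([1 / r for r in ratios])  # each inverse ratio is ~137.036, i.e. 1/alpha
```

Since frequencies and energies scale as 1/λ, the same check covers the f and E ladders as well.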
We usually define one wavelength of motion around a circle as 2π·r, and one cycle of a travelling wave λ as going through 2π radians. However, it is well known that in standing waves the distance from node to node at the fundamental resonant frequency does not occur at 2π, but rather at π. This explains the factor of 2 attached to α. Thus we can re-write our previous kg units comparison as:
kg_travellingwave/kg_standingwave = α ..... confirmed above for T³·S⁻³ (3D energy)
ERydberg/2/EBohr = α ..... 1D energy form T/S
EBohr/ECompton = α ..... 2D energy form T²·S⁻²
ECompton/EClass = α ..... 3D energy form T³·S⁻³
From the above we see that the relations for the different energy units of the Rydberg, Bohr, Compton and classical orbits obey the same relation as the travelling to standing waves we described previously. Starting from the simplest 1D form of energy T/S, denoted by its energy ERydberg, we see that EBohr should represent its standing wave. But we also see that the same standing wave EBohr of this level becomes the travelling wave of the next level, to create the next higher dimension of standing wave energy ECompton in 2D (on a surface). In turn, ECompton (the photon) becomes the travelling wave of the next level, to create the next dimension standing wave EClassic, the electron! Since photons obey all free space equations, whilst the electron obeys the natural laws of matter, it means that this dimension level is the same as the 3D energy level T³·S⁻³, and that the previous two are in fact the 1D and 2D versions of energy. Looking at the ST table, we see that T/S is usually manifested as energy, T²·S⁻² as momentum, and T³·S⁻³ as 3D mass, but all three are actually different manifestations of mass or energy.
Notice how the standing wave
of two plane 2D waves can generate a 3D rotating wave. You have to visualise
the blue standing wave as rotating about its axis, in and out of the page. This
would become the travelling wave in the next dimension; 3D. It now becomes
clear that each energy, for example ECompton can exist as a
travelling wave in 3D and also as a standing wave in 2D. This
solves the enigma for the wave-particle duality of light. Light will behave as
a travelling wave in 3D, but will act as a standing wave (perceived as
momentum) when it is projected on a 2D surface, that is, when hitting the
surface of a target or sensor.
Tweaking the α fine structure constant using common sense
Since there is no theoretical way to derive the exact value of the α constant, this is usually done experimentally at low energy levels. The current accepted value from the NIST reference is 1/137.03599911(46). But we now know something that most scientists do not: we know that matter is the 3D version of the electromagnetic energy T/S, whose structure is made up of elementary energy units connecting the nodes of their structure. We also know that the ratio of the 3D mass to the 1D 'unit energy' is EClass/ERydberg/2 = 1/α³. Hence the number of EM waves joining the structure of an elementary matter unit is equal to 1/α³ and should therefore be an integer. Taking the present CODATA value we get 1/α³ = 2573380.53. So at zero kelvin, the real value should be higher than this. If we stick to our platonic fractal structure, we find that any structure made up of any combination of platonic shapes will always end up with an even total number of elements, so it makes sense to select 2573382 as our value for 1/α³, which gives us α = 1/137.0360251, a value also within the 1986 CODATA margin of error and, most important, an exact value theoretically derived for a temperature of absolute zero kelvin. Note that since this tweaking method does not involve other parameters such as the gravitational constant, the electron charge, etc., all of which are known to have limited accuracy, the value obtained does not suffer from the inaccuracies of other constants as does the NIST derivation. Also, note that in no experiment is α measured directly; it is always a product of other measured parameters, and always measured above zero kelvin. Now the biggest challenge left is to show which 3D structure of 1D EM energy units is composed of exactly 2573382 elements.
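The cube-root step can be checked in one line; assuming the conjectured integer, a value very close to the quoted 1/α is recovered:

```python
# The text's conjecture: 1/alpha^3 should be the integer 2573382.
# Taking the cube root recovers the quoted value of 1/alpha.
n = 2573382
alpha_inv = n ** (1 / 3)
print(alpha_inv)   # ~137.036026
```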
Source | Value for α
CODATA 1986 |
Michael Wales | 1/137.0359896 (exact)
KR/VN-1998 |
LAMPF-1999 |
CODATA 1999 |
CODATA 2002 |
Dr. M. Geilhaupt |
I. Gorelik & Steven Harris |
Ing. Saviour Borg | 1/137.0360251 (exact)
Derivation of Free space constants
I will now reconfirm the above relation between the travelling and standing wave energy factor α by deriving all the free space parameters:
Plugging in these values according to the space-time dimensions given in the
table, we get:
Free space speed = S/T = 299.792458E6 m/s = speed of light
Free space impedance = [kF/j²]·T²·S⁻³ = 376.7303 Ω
Free space conductance = [j²/kF]·S³·T⁻² = 2.6544E-3 S
Free space permittivity = [j²/kF]·S²·T⁻¹ = 8.854187E-12 F/m
Free space permeability = [kF/j²]·T³·S⁻⁴ = 1.256637E-6 H/m
The above values agree with the known values for these parameters and thus
re-confirm the correctness of my ST system of units.
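The quoted free space values can also be cross-checked against each other, using the standard relations Z0 = √(μ0/ε0) and c = 1/√(μ0·ε0):

```python
import math

# Cross-check: the quoted free-space values are mutually consistent.
eps0 = 8.854187e-12     # permittivity, F/m
mu0  = 1.256637e-6      # permeability, H/m
Z0   = 376.7303         # impedance, ohms
c    = 2.99792458e8     # speed of light, m/s

print(math.sqrt(mu0 / eps0))       # ~376.73 ohms, the impedance
print(1 / math.sqrt(mu0 * eps0))   # ~2.9979e8 m/s, the speed of light
print(1 / Z0)                      # ~2.6544e-3 S, the conductance
```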
The Universal Limits
Using this unified theory of spacetime units, we find that the calculated units above coincide exactly with the well-accepted constants found in all conventional physics textbooks and define free space (at least all free space that we can account for till now). Since all accepted physics laws conform to
the conversion table, we now have the advantage to go further to deduce some
more interesting data for free space. So is free space (vacuum) a sea
of energy, or can we get a value for the power and frequency we can get from
the so called vacuum energy / ether energy / ZPE / radiant energy? Is there a
limit to the electromagnetic spectrum? Is there a limit to the maximum density
of matter? The answers are positive, and can easily be worked out using the
spacetime conversion for power:
Free space power limit P0 = kF/S = 5.2968E50 watts
Free space electromagnetic frequency limit f0 = 1/T = 7.39987E42 Hz = Planck frequency
Free space grand unification energy limit E0 = kF·T/S = 71.56085E6 joules, or 4.466477E17 GeV ... the energy at which all forces unify
Maximum permissible mass density = kQ·T³·S⁻⁶ = 8.208E95 kg/m³
Free space entropy S = [kQ/(kQ/k)]·T⁰·S⁰ = +1.380662E-23
Free space power is the maximum rate of transfer of energy that can flow
through freespace at any point in space or time. These units clearly show the
existence and values for the upper boundaries for power, EM spectrum frequency,
grand unification energy and mass density anywhere in the universe. Note that kF
relates to the fine structure constant, Planck constant and gravitational
constant. These values are thus relating the quantum relativistic physics of
electromagnetism to quantum gravity.
So, of particular interest is the derivation of the Energy of Unification from
my work, which would also equate to the typical energy of a vibrating string in
string theory:
Eunification = (2α/e)·√(h·c⁵/G) eV = 4.466477E17 GeV
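This unification energy can be recomputed directly (assumed CODATA-style inputs, so the last digits differ slightly from the quoted figure):

```python
import math

# 2*alpha times the Planck energy, converted to eV and then GeV
G, h, c = 6.674e-11, 6.62607015e-34, 2.99792458e8
e = 1.60217663e-19           # elementary charge, C (J per eV)
alpha = 1 / 137.035999

E_planck = math.sqrt(h * c**5 / G)    # ~4.90e9 J, the h-based Planck energy
E_uni_eV = 2 * alpha * E_planck / e   # unification energy in eV
print(E_uni_eV / 1e9)                  # ~4.47e17 GeV
```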
Special Relativity - Shrinking distances, time dilation, mass
changes
Are these effects real as Maxwell & Einstein thought they are ?
In 1687,
This led to the idea of relativistic mass, a mass equivalent to γ·m0, where m0 is the rest mass. This was further developed by Einstein in his special relativity (SR) theory, which was more or less the public version of Lorentz & Maxwell's work. SR had introduced for the first time quite
'weird' effects like time dilation and increase in mass of a moving body. One
of the strangest parts of special relativity as we know it today is the
conclusion that two observers who are moving relative to one another, will get
different measurements of the length of a particular object or the time that
passes between two events. Consider two observers, each in a space-ship
laboratory containing clocks and meter sticks. The space ships are moving
relative to each other at a speed close to the speed of light. Using Einstein's
theory, each observer will see the meter stick of the other as shorter than their own, by the same factor γ. This is called length contraction. Each observer will see the clocks in the other laboratory as ticking more slowly than the clocks in his/her own, by the same factor γ. This is
called time dilation. This is what special relativity predicts, and although
experimental results seem to agree, everybody still feels that there is
something wrong. Newton's laws became the result of the SR equations for the condition γ = 1, and as long as the mathematical
predictions were then in perfect agreement with experimental values, everyone
was happy to accept the requirement for such weird effects to be part of
nature, even though no logical explanation was ever found. Although this solved
the discrepancy between theory and experiment, it degraded the scientific laws,
as the correction factor could not be explained in terms of a physical model.
In an attempt to visualise a physical model, I transferred both
A spherical particle of mass mo leaves the source to reach
its destination, distance S apart in time t. Note that although a point
particle (zero dimensional object) is still accepted in most physics textbooks,
it is an impossibility and cannot be used to define a particle. Its mean
translational velocity is equal to v = S/t. Experimental evidence shows
that this translational velocity can be in the range zero to very close to c,
the speed of light, so geometrically, v can be shown as the projected shadow of velocity c, which makes an angle of θ with v. And since v ≤ c, c must always be the hypotenuse of the triangle c, v, a.
Also, from Newtonian mechanics, we know that the total KE
is equal to the sum of the body's translational kinetic energy and its
rotational energy, or angular kinetic energy:
Total KE = Translational KE + Rotational KE .... If VREAL is the resultant total velocity, then:
VREAL² = v² + Vo² .... where v and Vo are the translational and orbiting velocities at a point in time.
This relation shows us that VREAL, v and Vo form a second right-angled triangle, with VREAL being the hypotenuse. The translational kinetic energy of such a moving particle is equal to ½·m·v², where v is the linear velocity of the sphere, that is, the straight-line distance S between source and destination divided by the time t taken to travel through the whole path. This equation holds very well for
non-relativistic mechanics, but experiments involving particles travelling at relativistic speeds show that the KE no longer obeys the equation for the translational velocity v = S/t, but shoots up to infinity as v approaches c. This implies that although we 'see' the particle leave the source and reach its destination S metres away in t seconds, its real resultant KE is somehow not equal to the calculated translational KE ½·m·v², or ½·m·(S/t)². How can this be? This is where Lorentz and Einstein made their fatal mistake. They reasoned: well, if KE = ½·m·v² is not being followed, and v = S/t, then the particle's mass must be changing. As you will see, they were wrong! The mass is not changing at all; it
is the real path of the particle which can no longer be approximated as a
straight line, especially when v approaches c. When one looks
again at the relation for VREAL, their mistake becomes obvious: they assumed a zero rotational KE, that is, a null Vo. The object would in
fact be rotating/spinning around the path connecting source to destination, a
helical path being a good example. This is the key to understand relativity.
One can easily understand how rotation about its own axis can actually change its path length, whilst still reaching its destination point. The distance from
the source to the destination, divided by the time taken, gives the translational velocity, limited to c; but the actual helical path divided by the same time taken results in a much higher velocity, not limited to c. So
at any point in time, the real velocity is in fact travelling at an angle to
the linear velocity v, at a higher speed. We also know that velocity of light
as seen by a particle is totally independent of its real velocity, so the
velocity of the real path taken by the particle is always normal to the
velocity of light. So we know that c is perpendicular to VREAL.
Now we also know that as v tends to zero, angle θ tends to 90 degrees, and VREAL and v become almost equal, meaning that they tend to become parallel to each other. As v tends to c, angle θ tends to zero, and VREAL and v approach an angle of 90 degrees to each other, whilst VREAL grows infinitely long. This means that the angle between v and VREAL is equal to 90° − θ. Since triangle v, VREAL, Vo is a right-angled triangle, and the angles between vectors c & v and between VREAL & Vo are equal, triangle c, a, v is similar to triangle v, VREAL, Vo.
Consequences of the above description:
Lorentz and Einstein were wrong in their interpretation of experimental
results.
The path travelled by a particle can only be approximated as a straight line
either in calculus, or as the mean velocity tends to zero. So strictly speaking
a particle travels in a straight line only at v=0, in other words, a particle
CANNOT travel in a perfect straight line. Nor do electromagnetic waves travel in a straight line; they only spiral along a line.
Although the particle cannot reach its destination before another particle which could theoretically cross the path in a straight line at the speed of
light, its REAL VELOCITY along its real path, can exceed by far the speed of
light. Still, its information content in the direction of the 'imaginary'
straight line path cannot travel faster than light. I refer to the straight
line path as imaginary for the reason that nothing is really travelling in this
path, but only spiralling around it.
The velocity of light as seen by the particle real path is totally independent
of the particle's velocities.
Derivation of the Lorentz factor using simple geometry
Now that we are armed with a better understanding of the actual velocity components of any moving particle, we can easily derive the Lorentz factor from the above diagram, by applying simple geometry!
VREAL² = v² + Vo² .... (1) by Pythagoras
VREAL/Vo = c/v .... from similar triangles
Vo = v·VREAL/c .... (2)
Substituting for Vo in equation (1):
VREAL² = v² + (v²/c²)·VREAL²
VREAL²·(1 − v²/c²) = v²
VREAL = v/√(1 − v²/c²)
VREAL = γ·v .... where γ = 1/√(1 − v²/c²)
This means that most of the mathematics derived by Einstein and Lorentz still holds true, but has a different meaning: a meaning which, unlike time dilation and distance contraction, does make sense and can be easily explained by a physical model of the particle's actual path of travel.
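The geometric claim can be checked numerically: for any v < c, the total speed VREAL = γ·v satisfies Pythagoras with the two orthogonal components v and Vo = v·VREAL/c, and exceeds c once v > c/√2. A minimal sketch:

```python
import math

# Helical-path reading of gamma: v_real = gamma*v, with orthogonal
# components v (translational) and v_o = v*v_real/c (orbital).
c = 2.99792458e8

for beta in (0.1, 0.5, 0.9, 0.99):
    v = beta * c
    gamma = 1 / math.sqrt(1 - beta**2)
    v_real = gamma * v             # total (helical) speed
    v_o = v * v_real / c           # orbital component, from similar triangles
    assert math.isclose(v_real**2, v**2 + v_o**2, rel_tol=1e-9)
    print(beta, v_real > c)        # v_real exceeds c once beta > 1/sqrt(2)
```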
From the above, it follows
that travelling in a straight line is something which nature abhors, and a
perfect straight line travel occurs only at v=0, or in calculus as dS/dt tends
to zero. Also, referring again to our relativity velocity vector diagram, the
resultant real velocity VREAL is made up of two normal vectors, one
of which is v, which points in the same direction joining the source to
destination. So, we know, that VREAL is really the resultant of two
velocity vectors v and Vo which are normal to each other. This might not make
much sense until you follow the helical path diagram which shows how such path
must look to satisfy all the above conditions.
A helical path is one example in which the resultant velocity is made up of two
velocity vectors v & Vo which are always normal to each other at any point
in time. This is a path in which the ratio of VREAL to v is equal to the
Lorentz factor, resulting in a kinetic energy value which goes to infinity as
the mean velocity v tends to c, but where no distances shrink, no time dilates
and no mass goes to infinity! At low mean velocity v, much less than c, Vo the
orbital velocity of the spiral will be very small, the path will closely
resemble a straight line, and VREAL will be almost equal to v and to S/t.
In such a case Newton's laws will give the correct results even if the path is
approximated as a straight line, and angular velocity assumed null. As velocity
increases, the orbiting speed will also increase and VREAL will
increase to superluminal helical velocities, but the magnitude of the mean
linear velocity to its destination will be still less than c. Applying Newton's
laws on a straight path will no longer yield the correct results, because the
angular velocity is no longer negligible. So, the correction factor γ is
only required if one totally ignores the angular motion. Once angular velocity
is put into the equation, the correct kinetic energy is obtained.
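The behaviour described above can be made concrete with a small numerical sketch (an illustrative construction, with `helical_speed` a hypothetical helper name): the speed along the path is built from the two perpendicular components, stays close to v at low speed, and becomes superluminal as v approaches c.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def helical_speed(v, c=C):
    """Speed along a helical path whose axial component is v and whose
    orbital component Vo = v*V_REAL/c is perpendicular to it at every
    instant, so |velocity| = hypot(v, Vo) = gamma*v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    vo = gamma * v * v / c      # orbital speed of the spiral
    return math.hypot(v, vo)

# low mean velocity: the path is nearly straight and V_REAL ~ v, so
# Newton's laws on the straight-line approximation give good results
print(helical_speed(0.001 * C) / (0.001 * C))   # ~1.0000005

# high mean velocity: the helical speed is superluminal even though
# the linear component v is still below c
print(helical_speed(0.99 * C) / C)              # ~7.02
```

At v = 0.99c the speed along the helix is about 7c, while the straight-line (source to destination) component remains below c.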
This model does not exclude superluminal speeds; however, it still has the
speed of light limit within its linear part, the velocity which we measure by
measuring the time it takes for a particle to travel from source to
destination. This also clearly explains why we do not see photons along their
travel. We see photons at the radiating source, and at destination, but since
they are superluminal during their helical journey, they are not visible along
their path! Also, it is kind of silly to assume that when a photon is released,
its total KE is only made up of translational KE and totally ignore its
rotational KE. In fact from the above it is obvious that a mass with zero
angular KE is not a mass at all.
From the particle section discussion, we know that matter (defined as having mass) is made up of standing waves. The picture below shows a simple form of matter made up of a helical standing wave. A pair of helical waves is all that is required to generate matter. All elementary particles should be of this form: whether it is an electron, an atom, a quark or any other newly discovered particle, it will be of this form. An electron is one such example. The positron is exactly the same but goes backwards in time; all this means is that v, Vo and VREAL point in the opposite directions. The spin is simply v.
For the condition Vo/v = √2, or θ = 35.264
degrees, we get VREAL = √2·c.
Applying Newton's law to find the total internal energy of a particle:
E = ½mv²
E = ½m(√2·c)²
E = ½m·2c²
E = mc²
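The arithmetic for this condition (Vo/v = √2, giving VREAL = √2·c and θ ≈ 35.264°) can be replayed directly. A sketch, with an illustrative mass value:

```python
import math

C = 299_792_458.0             # speed of light, m/s
m = 9.109e-31                 # illustrative mass (roughly an electron), kg

v = C * math.sqrt(2.0 / 3.0)  # linear component fixed by V_REAL^2 = v^2 + Vo^2 = 2c^2
vo = math.sqrt(2.0) * v       # condition Vo/v = sqrt(2)
v_real = math.hypot(v, vo)    # resultant speed along the helix = sqrt(2)*c

theta = math.degrees(math.atan(v / vo))
print(theta)                  # ~35.264 degrees

e_newton = 0.5 * m * v_real ** 2   # Newton's kinetic energy on the real path
e_mass = m * C ** 2                # Einstein's mass-energy
assert math.isclose(e_newton, e_mass, rel_tol=1e-12)
```

The assertion confirms that, under this geometric condition, ½m·VREAL² coincides with mc² exactly.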
So knowing about the helical path we can derive Einstein's equation directly
from Newton's equation, not the other way round! More important is the fact
that we can finally construct a physical model for ALL matter. Note that the
above standing wave is made up of pure electromagnetic waves, and that the
whole circular helix has the properties of matter. Now, if external energy
(kinetic & rotational) is supplied to this helix, the whole structure will
start moving in a helical path of greater dimensions. The grey entities moving
around the bigger circular helix will thus be the original helices. The bigger
helix will still have the properties of matter, but its standing wave will be
home to a number of smaller 'particles' which can be made to increase or
decrease in quantity by kicking them with enough energy. This mechanism is the
fundamental mechanism of nuclear theory. All smaller helices within one larger
helix will have exactly the same properties and be similar in size and
frequency. Each helix size will thus exist in a different hierarchy level, with
the lowest level being the smallest helix that is made up of pure
electromagnetic waves, that is with no internal circular helices. The relation
between hierarchy levels is governed by the fine structure constant, which
actually defines the maximum speed limits for v and Vo at which lower stage
helices can move around the main circular helix.
The figure above is a much better scientific explanation of the origin of matter and of what one would expect to get when bombarding matter in particle accelerators. It also solves the enigma of the point particles. Nobel laureate Paul Dirac, who developed much of the theory describing the quantum waves of the electron, was never satisfied with the point-particle electron, because the Coulomb force required a mathematical correction termed renormalization. In 1937 he wrote: 'This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it turns out to be small, not neglecting it because it is infinitely large and you do not want it!' [P. A. M. Dirac, Nature, 174, 321, p 572 (1937)]. The figures below are basically what present books teach (not a joke).
The realities of mainstream science
So, we collide two cherries, and get pears, bananas,
apples and all fruit varieties! Of course, with no physical model for matter,
nuclear theory offers a lot of enigmas and surprises to scientists. Once we
start sorting out matter on different hierarchy levels, everything becomes
clearer. For example, we can now say that all known atoms exist in the same
hierarchy level. They differ only in the number of helical anti-nodes
(protons), their v/Vo ratio (proton to neutron ratio), and the number of
smaller helices moving around (electrons), but they are on the same hierarchy
level. The fact that the number of protons is usually equal to the number of
electrons indicates, that for a stable atom, each structure antinode can handle
one lower hierarchy helix within it. Let's say these higher level structures
are the cherries. Once we collide these two big helices into each other, some
of the lower level(s) helices, get mechanically dislodged and since they are
standing wave circular helices on their own, they will be detected as
independent matter, say pears and bananas. So, why were pears and bananas not
visible in the first place? Simply because their helical velocity is faster
than the speed of light, and anything faster than the speed of light cannot be
detected! The pears and bananas are no longer spiralling around the cherry
structure at superluminal speeds, but have been kicked off their orbit and are
now travelling at the much lower velocity resulting from the impact. If one
splits the resulting pears, then apples may be detected, and the process
continues, with energy levels going up as lower hierarchy levels are
approached. The process continues until the Planck energy level is reached, at
which point the resulting outcome of the bombardment will not be a standing
wave (detected as matter) but pure travelling electromagnetic waves at the
Planck frequency and energy,
travelling at the speed of light.
Calculating speed limits for v and Vo.
(ERydberg/2)/EBohr = EBohr/ECompton = ECompton/Eclassical = α = 1/137.036
From E = mc², we can therefore get the relation in terms of masses:
(MRydberg/2)/MBohr = MBohr/MCompton = MCompton/MClassical = α = 1/137.036
This clearly shows that α is nothing but the mass or
energy ratio of a circular helix standing wave to a similar helix of higher
hierarchy level. If we take the lower hierarchy level helix as our 'stationary
mass' Mo, we have:
MREAL = γMo .... where γ = 1/√(1 − v²/c²)
...but MREAL = (1/α)·Mo,
which implies that for sequential hierarchy levels in matter, α = 1/γ = √(1 − v²/c²)
1/137.036 = √(1 − v²/c²)
v = 0.999973374c
Also, c·α = √(c² − v²), which is why I have put c·α in the relativity diagram
at the top of this page.
The fine structure angle θ = ArcSin(α), so:
Fine structure angle θ = ArcSin(1/137.036) =
0.418111 degrees. The real superluminal helical path velocity at which the
internal hierarchy levels move within the structure is
VREAL = γv = v/α = 137.036v
VREAL = 137.036 × 0.999973374c = 137.032c
So, strictly speaking, Einstein's equation E = mc² is not exact, since it
assumes that the translational velocity v can reach c, whilst in fact it is
limited to a maximum of 0.999973374c. So the exact equation for
energy-mass equivalence is:
E = 0.999946748·mc²
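The whole numerical chain above, from α to the maximum translational speed, the fine structure angle, the superluminal helical speed and the correction factor, can be replayed in a few lines (a sketch, using only the approximate value α ≈ 1/137.036):

```python
import math

alpha = 1.0 / 137.036          # fine structure constant (approximate)

# alpha = sqrt(1 - v^2/c^2) gives the maximum translational speed (units of c)
v_max = math.sqrt(1.0 - alpha ** 2)
print(v_max)                   # ~0.999973374

# fine structure angle theta = ArcSin(alpha)
theta = math.degrees(math.asin(alpha))
print(theta)                   # ~0.418111 degrees

# superluminal helical speed of the internal hierarchy level (units of c)
v_real_max = v_max / alpha
print(v_real_max)              # ~137.032

# correction factor in E = 0.999946748*m*c^2
print(v_max ** 2)              # ~0.999946748
```

Each printed value matches the figure quoted in the text, so the chain is internally consistent given the starting value of α.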