The trouble with the password system described below is this: what happens if you forget your calculator? That, in fact, is just what I did today, and as a result I could not get into several of my accounts.
But as I was out driving last night a wunderbar idea hit me. Select a license plate at random and then vary it a bit. The plate is easily remembered but, if not a vanity plate, presents an effectively random string.
In New Jersey, where I've been biding my time of late, the plates are six characters of the form XJG 23P, or, that is, LLLNNL, where L stands for letter and N for numeral.
Just in case an adversary might run all U.S. plates with six characters, we alter the string: we add one extra number or letter, make one substitution (here the J becomes K) and toss a coin to decide whether to transpose the letter block and the number block. That is, we might write 23P XKG M or XKG 23P M; alternatively, the M can be placed as the first or second element.
Even if the adversary discerns the order, as in LLLNNL(L or N), the number of combinations is (26^4 x 100 x 36)/6 = about 274 million. If the order is unknown, the adversary must contend with 36^7 = about 78 billion possibilities.
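As a sanity check, the two counts can be reproduced in a few lines; the formula and the divide-by-6 ordering factor are taken straight from the estimate above:

```python
# Size of the search space for the altered-plate password scheme.
letters, numerals, alnum = 26, 10, 36

# Layout known to be LLLNNL plus one extra character, with the
# ordering ambiguity divided out as in the estimate above.
known_order = (letters**4 * numerals**2 * alnum) // 6

# Layout unknown: seven alphanumeric characters.
unknown_order = alnum**7

print(known_order)    # 274185600 (~274 million)
print(unknown_order)  # 78364164096 (~78 billion)
```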
So, in order to combat identity theft, internet hacking and unwarranted government snooping, we should encourage Americans and indeed everyone to adopt this excellent password technique.
Sunday, April 6, 2008
Saturday, April 5, 2008
Passwords: the weakest security link
I've been laboring under the delusion that passwords composed of words and numbers are relatively effective. Adding numbers to passphrases was recommended in order to defeat dictionary-search attacks. But, upon reflection, I thought of a variant algorithm that teases out probable words in passphrases.
It goes like this:
Step 1. Search all words, common names and short phrases of length L, where L is the length of the passphrase, if known. Even if it is not known, the maximum password length normally is known, and L can be set to Lmax.
Step 2. If Step 1 fails, begin at character 1 and search all words of length L-1 and check all 10 numerals for the last space.
Step 3. If that fails, do the same for L-2, checking all 100 two-digit numbers for the last two spaces.
If need be, proceed to perhaps L-n = 3 (a three-letter word) and check the number combinations to the right, a not unreasonable computer task for six numerals (a million possibilities).
Then reverse the process, checking numbers of m digits followed by words of length n, where m+n = L.
If need be, check all possibilities such as
xx(dictionary word of length L-5)xxx
That is, we check all 100 two-digit prefixes followed by all dictionary words followed by all 1,000 three-digit endings. Here, the number of possibilities rises to 5 x 10^10, or 50 billion, which is a bit much. But that's estimating 500,000 English words. The set of most common words is perhaps 5,000, yielding 500 million combinations. This may be doable, but still, if you must use a word-number combination in your password: break it up! That is, jack77351 is much more vulnerable than 77jack351. In the former case, there are 50 million possibilities; in the latter, 500 million.
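The word-plus-digits search described in the steps above can be sized with a quick count. The 5,000-common-word and 100 x 5,000 x 1,000 figures are the post's own estimates; the 500 common four-letter words used to reproduce the 50 million figure are my back-fill, purely illustrative:

```python
def word_digit_candidates(n_words: int, n_digits: int) -> int:
    """Number of candidates of the form <word><n_digits numerals>."""
    return n_words * 10**n_digits

# jack77351-style: a common 4-letter word followed by five digits.
former = word_digit_candidates(500, 5)   # 50,000,000 candidates

# 77jack351-style forces the wider xx<word>xxx search:
# 100 two-digit prefixes x 5,000 common words x 1,000 endings.
latter = 100 * 5_000 * 1_000             # 500,000,000 candidates
```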
Then there are the very long passphrases, such as those recommended by Hushmail.com. The idea is that, with 47 possible characters per position, 47^n goes way beyond available computing power for n greater than about 10.
However, long passphrases may be susceptible to statistical methods. First, we do a regression analysis to separate the dummies from the words and also to separate the random component(s) from the words. We then run a frequency analysis on the word component(s) and voila!, the remainder follows swiftly.
But who can remember long randomly generated strings, which are the best passwords? Well, what about the next best thing: pseudorandom strings generated by your pocket scientific calculator (but beware: some pseudorandom techniques are no good).
That is, it may be better to remember a specific function or two rather than some phrase.
A shorter Yahoo password:
3.7(7.33/7) yields a decimal extension: 26851482. And we can always toss in some non-numerals, as in: b26851482n
A longer Hushmail password:
7.1(171/7) union 3.5(533/5)
we write as +]6423310989950748'+
A bit of "false" symmetry with the plus signs might slow some kind of numeral hunt.
The numerical symmetries in the functions are meant as memory aids. Yet I doubt they would show up very easily.
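A calculator-derived password like those above can be sketched as follows. The sqrt(7) seed and the digit-extraction helper are my illustrations, not the post's actual functions:

```python
import math

def calc_password(value: float, n_digits: int = 8) -> str:
    """Keep the first n_digits of the fractional part of a memorable
    calculator result, as in the decimal-extension trick above."""
    frac = repr(value).split(".")[1]   # digits after the decimal point
    return frac[:n_digits]

# sqrt(7) = 2.6457513110645907...: easy to recall, effectively random
pw = "b" + calc_password(math.sqrt(7)) + "n"   # pad with non-numerals
print(pw)   # b64575131n
```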
Thursday, March 6, 2008
Zeno a go go
The post below generated a lively controversy (see comments).
So now we serve up Zeno deluxe:
Consider a measuring stick. We have the segment from points 0 to 1 -- [0,1] -- which is a finite distance. Nevertheless, we can map the entire set of positive integers plus 0 into this interval.
Now suppose we have an infinitesimal spaceship able to traverse each infinitesimal distance between n and n+1 at some infinitesimal speed. The ship would move at a finite speed from an infinitesimal perspective but that speed would be infinitely slow from our perspective. That is, an infinitesimal observer at point 1 would have to wait forever for the infinitesimal ship to cover the distance of 1 unit.
Now suppose we argue that Earth now is an infinitesimal object located at point 1. In that case, there could be a 0 point that is infinitely far in the past from our perspective but only finitely far in the past from the perspective of a "higher space."
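One concrete way to carry out the integer-to-interval mapping above (my choice of function; any injection would do) is to send n to n/(n+1):

```python
def embed(n: int) -> float:
    """Send 0, 1, 2, ... injectively into [0, 1); images pile up at 1."""
    return n / (n + 1)

points = [embed(n) for n in range(6)]
# 0.0, 0.5, 0.666..., 0.75, 0.8, 0.8333... -- never reaching 1
```

An infinitesimal traveler stepping from embed(n) to embed(n+1) takes infinitely many steps yet never covers the finite distance to the point 1.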
Monday, March 3, 2008
Zeno was right
It's only in the last few decades that the Big Bang theory has been accepted and the cosmos dated as being on the order of 10^10 years old.
Prior to the acceptance of that theory, many scientists held that the cosmos had been around forever, though perhaps the problem of entropy might raise a question about that idea.
But now if we measure time in terms of earth years, and consider that each year can be represented as a member of the set N of whole numbers, and assume that time flows in one direction, then we are left with the paradox that we can't be here. That is, infinity is never reached as a completed succession of finite steps.
The only way that we can posit an eternal past is to use our present time as origin and extrapolate backward. But, that scenario doesn't account for the usual idea of our current situation as dictated by the convergence of sequences of mechanical causes. If, say, we think of the earth as it is at 1:55 p.m. EST, March 3, 2008, as represented by the intersection of two (or any number) of waves at that time, then those waves, if we assume they move forward in time, can never have arrived in order to cross. [I'm using the wave analogy as shorthand for sets of causes and effects.]
Similarly, quantum theory vindicates Zeno.
We measure the gravitational potential energy of a swing raised to height y as mgy. But, energy is quantized. This means that we cannot raise the swing to any height between 0 and y, but only to heights which incorporate Planck's constant h. That is, there is not a continuous and infinite number of points to which the swing can be raised but a finite number of points (in the trillions, of course).
So what happens to the swing when it passes from height y_n to y_(n+1)? I suppose you would say it makes a quantum jump. It exists momentarily at height y_n and would not be defined as moving during this brief time interval. Yet it doesn't move in a classical sense when it rises to y_(n+1), because it can't exist at any intermediate height y_m with y_n < y_m < y_(n+1).
The puzzle is compounded by the fact that the swing is composed of trillions of quantum particles each subject to quantum uncertainty. I suppose we could take the center of mass of the swing to be what is measured. That would rise only to specified heights and that would be construed as "quantum jumping."
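To see why the jumps are invisible in practice, one can model the swing as a harmonic oscillator with level spacing hbar x omega. For everyday masses the quantum number is astronomically large; the 20 kg mass and 2.5 m pendulum length here are assumed purely for illustration:

```python
import math

hbar = 1.054571817e-34        # reduced Planck constant, J*s
m, g, y = 20.0, 9.81, 1.0     # swing of 20 kg raised 1 m (assumed)
length = 2.5                  # pendulum length in metres (assumed)

omega = math.sqrt(g / length)   # small-angle angular frequency
energy = m * g * y              # gravitational PE, about 196 J
n = energy / (hbar * omega)     # quantum number: on the order of 1e36
```

With roughly 10^36 allowed heights between rest and y, the discrete ladder is indistinguishable from a continuum at human scales.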
Wednesday, December 26, 2007
A recursion function for square roots
OK. This post is NOT a big deal. I posted a previous version on Angelfire several years ago and, after reviewing it last night, decided to polish it up just a tad.
The recursion function
X_n = 1/X_(n-1) + C (starting from X_1 = X_0 + C)
is a special case of the continued fraction
C + 1/(C + 1/(C + 1/(C + ...)))
which, taken to infinity, equals (C + (C^2 + 4)^0.5)/2.
The recursion equals that continued fraction when X_0 = 0.
However, we point out that any real other than -C can be used as X_0, and the result in the infinite limit is the same single value.
To see this, we write three iterations of the recursive thus:
(((X_0 + C)^-1 + C)^-1 + C)^-1 + C
And if you write it out in traditional form using 1/z rather than z^-1, it becomes obvious that
lim (n -> inf) X_0/f(X) = 0, where f(X) is the recursive function.
And note that if X_0 =/= Y_0, then at any step n of the recursion, the values are not equal. There is equality only in the infinite limit.
An example for C = 2, n = 4.
lim (n -> inf) X_n = (2 + 8^0.5)/2 = 1 + 2^0.5
X0 = 1
3.0
2.333...
2.428571429
2.411764706
2.414634146
X0 = 1/2
2.5
2.4
2.41666...
2.413793103
2.414285714
X0 = 31
33.0
2.0303...
2.492537313
2.401197605
2.416458853
X0 = -31
-29.0
1.965517241
2.508771930
2.398601399
2.416909612
X0 = 1/31
2.032258065
2.492063492
2.401273885
2.416445623
2.413830955
X0 = -1/31
1.967741935
2.508196721
2.398692810
2.416893733
2.413754228
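The tables above can be reproduced in a few lines. As I read the scheme, each column starts from X0 + C and then repeatedly takes the reciprocal and adds C:

```python
def iterate(x0: float, c: float, steps: int) -> list:
    """v1 = x0 + c, then v_{k+1} = 1/v_k + c, as in the tables above."""
    v = x0 + c
    out = [v]
    for _ in range(steps):
        v = 1.0 / v + c
        out.append(v)
    return out

# C = 2: every seed drifts toward 1 + sqrt(2) = 2.41421356...
print(iterate(1.0, 2.0, 4))
# [3.0, 2.333..., 2.4285..., 2.4117..., 2.4146...]
```

Running it for the other seeds (1/2, 31, -31, 1/31, -1/31) reproduces the remaining columns and shows the common limit.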
Wednesday, November 14, 2007
Probability of intelligent design
Draft 4
How might we calculate the probability that a very primitive organism occurs randomly? Herewith is an approach:
Let us grant that any organism is a machine that changes a set of inputs into a set of outputs.
In that case such an organism can be modeled by either a Turing machine or Boolean circuit (it is easy to show that Turing machines require Boolean logic gates).
In that case we can choose some economical numbering scheme for any Turing machine or Boolean circuit. Perhaps we'll use Godel numbering as a bookkeeping measure to keep track of the order of the gate numbers. So we can string a sequence of gate numbers together as one number that represents the organic machine.
Supposing that we are using binary numbers, the probability of any one string occurring randomly (with each digit independent and equiprobable) is 2^-m, where m is the length of the digit string.
Now the probability that a particular machine does not occur is given by the complement 1 - 2^-m. So then the probability that the machine's string does not occur in a string of n digits is given by (1 - 2^-m)^n.
Of course, perhaps the gates are not equiprobable and yet our method only varies probability by string length. However, there must be in that case some primitive components of the gates that can be written as simple digit strings where 0 and 1 are equiprobable (there seems no good reason not to make those digits equiprobable in virtually all cases).
Using an artificially low example, a specific 16-digit string (probability 2^-16) has a probability of 0.999999764 of occurring in a string of 10^6 digits, or virtually certain.
But the ratio 2^-16/10^6 = 1.526 x 10^-11 does not seem unreasonable on the face of it. In fact, one might expect much lower ratios.
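The 0.999999764 figure follows directly from the complement formula above; a quick check, treating the n digit positions as independent trials as the formula does:

```python
def hit_probability(m: int, n: int) -> float:
    """P(a specific m-bit string occurs) = 1 - (1 - 2^-m)^n."""
    return 1.0 - (1.0 - 2.0**-m) ** n

p = hit_probability(16, 10**6)
print(round(p, 9))   # 0.999999764
```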
Of course by abstracting out all the physical data, we may well be overlooking an important consideration.
Still, once physical factors are taken into account, one would expect this approach to be useful.
An interesting concept: Take the typical information value of a virus code to obtain a string of n bits. Then establish a software program that concocts strings of length n randomly. We grant the program considerable computational power -- perhaps it uses a network of parallel processors -- and determine how many bits per second it can process. Without actually running the program, we can then determine the probability that a particular virus code will emerge randomly in some specific time interval.
(We could notionally add something to each string that allows it to "escape" into the internet. However, only a virus string will continue past the first receiver.)
A possible physical approach: atoms of each element have a probability of combining with atoms of another over some temperature range and depending on density of each in some volume. Compound molecules likewise. These atoms and molecules can be numbered, according to the scheme outlined, and probability of a primitive cell might be worked out.
Of course, we should posit a plausible environment: sufficient heat and a good mix of chemicals in a small enough volume over a large enough time. In principle, one should be able to test whether the probability is acceptable or not for life to emerge from an inorganic stew.
I realize that probabilities have been used to try to assess such a happenstance. And complexity theory has been developed in large part to address the issue of emergent order (negative entropy). In other words, emergent order implies that in some cases conditional probabilities are to be used. However, when a sample size is large enough, we can often ignore conditional probabilities and treat events as independent, as the central limit theorem tends to show.
Emergent order implies that conditional probabilities become predominant at the level of pattern recognition (emergence). However, before that level, the component events can usually be treated independently, if not as equiprobable.
Some numbers
Cray's Red Storm supercomputer has exceeded the 1 terabyte per second mark and Cray talks of a bandwidth capacity of 100 terabytes per second.
If we guess that a string of code that not only self-replicates but self-defensively morphs must have a minimum of 256 bits and put to work a 100 terabyte-per-second processor spitting out binary strings of 256 digits, we obtain:
10^14(2^3) = 8 x 10^14 bits per second versus 2^256 = 1.16 x 10^77.
The probability that the specific string will be generated randomly corresponds to an average time interval between hits of about 1.45 x 10^62 seconds, or roughly 4.6 x 10^54 years, with the age of the cosmos being on the order of 10^10 years.
How many strings of length 256 would replicate and morph? It's hard to believe the number would exceed 10^6 -- because various substrings follow specific permutations in all cases -- and the difference in probability is negligible.
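Recomputing the interval between hits from the figures above, counting one candidate string per emitted bit (a generous sliding-window assumption of mine):

```python
rate_bits = 8e14            # 100 TB/s expressed in bits per second
space = 2.0**256            # about 1.16e77 equiprobable 256-bit strings
seconds = space / rate_bits # mean time between hits, ~1.45e62 s
years = seconds / 3.156e7   # ~4.6e54 years

# Versus a cosmic age of order 1e10 years: not worth waiting for.
```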
What is design?
But, when we detect a computer virus, why do we infer design? Because we know that such code IS designed non-randomly. We can say that in 99.999etc. percent of cases investigation reveals a human agent. So we have a frequency or assumed frequency for a nonrandom cause for comparison.
Similarly, we may compare a digit string of length n to "common" strings of that length. For example, if we see the first nine digits of pi's decimal extension, we compare the randomness probability of 10^-9 with the fact that that sequence is associated with a set of human-discovered algorithms. We can guess that the number of practical algorithms for producing nine-digit sequences is low by comparison with 10^9. (By practical, we mean an algorithm that is used often in math, science or engineering.)
Now when it comes to an organism at stage 0 of evolution, we may find an amazingly low probability of random occurrence. But we should not ask WHY SHOULD such an event have occurred but rather WHY NOT? If a grammatical (functioning) logic gate string is one of a set of equiprobable strings, then, absent further information, there is no reason to deny that the string occurred randomly. On the other hand, if we ask, "what is the chance that such an event WILL happen," the probability tells us not to hold our breath waiting.
Now modeling a stage 0 organism as a machine represented with a blueprint of logic gates, we may ask, when is a machine designed rather than a consequence of random forces? A machine is determined to have been designed if a human-devised algorithm for its construction can be found and if experience shows that the system rarely if ever is seen naturally. Suppose we camouflage the machine such that immediate sensory clues are of little importance. So we examine the machine's operation and draw up a description, using appropriate symbolism, to see whether the description matches some subset of known human-derived algorithms or blueprints. If so, we infer design based on the higher frequency of designed machines to randomly generated machines.
So then, in order to infer design of a stage 0 organism, we would need to have available at least one human-discovered formula. But if we had that, it would radically alter the entire debate.
So how would a higher power design a stage 0 organism? No answer is given, perhaps on the assumption that the human mind could not fathom such a design. That may be true. However, one cannot show a strong probability of design without a reference base, and there are no divine blueprints that can be examined for comparison.
It should be noted that if it were argued that lifeforms had emerged spontaneously on several occasions at a rate higher than what would be expected randomly, then we would have a basis for believing that a non-random force was at work within some confidence interval. However, one emergence gives little basis for drawing conclusions.
That said, one may agree that the apparent startlingly low probability associated with life's origin may well stir much inner reflection.
Saturday, October 27, 2007
The Jordan curve theorem holds for wild curves
Draft 3
Assertion
The Jordan curve theorem holds for a set of non-visualizable, "wild" curves that can be posited in accord with the Well-Ordering Principle.
Prefatory remarks
An intuitive idea of the well-ordering of the reals can be had by considering the following:
We have that any infinite digit string of 0s and 1s may represent a real. Now the strings of length n may be listed from least to greatest, there being 2^n of them, with the understanding that any digit after position n is a 0.
This denumerable ordering holds for any finite n. However, denumerable ordering does not, of course, hold for the entire set 2^N. But the Well-Ordering Principle can be interpreted to mean that the set of string permutations is precisely ordered at 2^N.
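The finite case is easy to check directly. A short Python sketch (illustrative only, not part of the argument) lists all length-n binary strings in increasing order, there being 2^n of them:

```python
from itertools import product

def ordered_strings(n):
    """All 0/1 strings of length n, listed least to greatest.
    For fixed-length binary strings, lexicographic order agrees with
    the numeric order of the binary numerals they spell."""
    return sorted("".join(bits) for bits in product("01", repeat=n))

strings = ordered_strings(3)
print(len(strings))             # 2^3 = 8 strings
print(strings[0], strings[-1])  # 000 111
```

No such enumeration exists for the full set 2^N of infinite strings; that is where the Well-Ordering Principle is being invoked.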
We can obtain the set of all curves, including wild non-differentiable and non-fractal-like curves thus:
Orienting ourselves on the x-y grid, using Quad I for convenience, we can arbitrarily assign any y height to any x value in the interval, say, [1,2]. By this, two neighboring x points can have wildly different heights, though there would still exist a slope for them. But, by the Well-Ordering Principle, there must exist two points that are precisely ordered and that have 0 distance between them. These two points will have no slope and constitute a wild, non-visualizable curve "section." That is, we have a situation where there is a y height for every real but no slope for the curve anywhere, even though y does not equal 0.
Though the area under such a curve must be less than 1 * y_max (the interval [1,2] having width 1), we may find it difficult to evaluate this integral or even to give a good approximation by numerical methods.
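To see why numerical methods struggle here, consider a crude Monte Carlo sketch in Python. This is my illustration only: it models the heights as independent uniform random draws, which is an assumption -- the wild curve itself cannot actually be sampled. Every run respects the width * y_max bound, but different runs need not settle on any one value:

```python
import random

def area_estimate(samples, ymax=1.0, width=1.0, seed=None):
    """Crude Monte Carlo 'area' over an interval of the given width,
    with the height at each sampled x modeled as an independent
    uniform draw in [0, ymax] -- a stand-in for the wild curve."""
    rng = random.Random(seed)
    mean_height = sum(rng.uniform(0.0, ymax) for _ in range(samples)) / samples
    return width * mean_height

est = area_estimate(10_000, ymax=2.0)
print(est)  # always between 0 and width * ymax, but run-dependent
```

The random model averages out nicely; the actual well-ordered assignment of heights gives the integrator nothing comparable to converge on.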
To complete the set of all planar curves, we mention fractal and fractal-like curves, which are discussed briefly below.
Proof
We form what I call a molecule, or bubble, with the understanding that such an entity may not be visualizable with a drawing.
We may define an origin-based molecule as r = cos x where, with a well-ordering of the set of radians, r is any finite length. Additionally, r(x) = cos x is a relation with 2n-1 values, accounting for "fingers" -- any of which may be infinitely short -- and "interior molecules" beyond the neighborhood of origin. An interior bubble is defined in the same way as a basis bubble, except that its relation r' = cos x requires that for every value of x, r'(x) is less than r(x) and falls between origin and r(x). (Note: an interior bubble is defined by the relation and is not considered an a priori figure here.)
This will suffice to describe any n-loop; i.e., simple loop or pretzel, though we must shortly consider some sets.
[To help one's mental picture, we can proceed thus after forming a basis molecule whereby there are no fingers or holes. We form a second molecule, which may or may not be congruent, and map it onto the first by arranging a translation of coordinates, the effect of which is to intersect the two bubbles such that they share at least two points. All points other than the join points are then erased. The join points are those which do not intersect only a bubble's points.
[We can orient a bubble any way we like and add bubbles to bubbles to our heart's content. We may get a simple loop or an n-loop pretzel.
[A pretzel may appear when two or more bubbles intersect and the intersection set is construed to be "empty" or "background." If a pretzel hole appears, then the intersection obeys the relation requirements of a molecule. A single-hole pretzel is defined by a relation s in which s(1) and s(2) each have one value and there is a subset of x for which s(x) takes exactly four distinct values.]
Now the interval [r(x)_low, r(x)_high] is (ignoring fractals) a finite line segment. So we regard those two values as the end points of the line segment. We then require a Dedekind cut between such an end point and the end point of the corresponding, coinciding half-line. In the case of fingers and pretzel holes, we have, rather than a half-line, another line segment, of course.
Now the set of such Dedekind cuts maps as a continuous curve about a finite area with no end points. Suppose the boundary curve had two end points. In that case the relation r would have 0 or 2n values, a contradiction.
So, with respect to a molecule, we have that a point on the plane is either an element of the figure's Dedekind boundary set, an element of a line segment r = cos(x) such that there are 2n-1 values of r(x) (not including origin), or neither. So then, the Jordan curve theorem is proved for molecules -- n-loops -- with continuous or wild curves. (We have not bothered with the matter of nested sets of interior loops, which clearly follows from the preceding.)
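The parity argument just used -- a crossing count that comes out odd from inside the loop and even from outside -- is the same idea behind the standard ray-casting point-in-polygon test. A minimal Python sketch for ordinary polygonal loops (my illustration; it does not handle wild curves or points lying exactly on an edge):

```python
def point_in_polygon(pt, poly):
    """Ray-casting test: cast a horizontal ray to the right from pt
    and count boundary crossings; an odd count means pt is inside."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does this edge cross the horizontal line through pt,
        # strictly to the right of pt?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon((2, 2), square))  # True: one (odd) crossing
print(point_in_polygon((5, 2), square))  # False: zero (even) crossings
```

Each toggle of the boolean corresponds to one of the 2n-1 boundary values met along the ray.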
In the matter of fractal, or fractal-like, curves, it is plain that a fractal construction at any finite step n is composed of a set of self-similar bubbles of diminishing size. Clearly our proof holds for any such step. By the way, we can see immediately that we can apply, at least notionally, a wild curve to a fractal, giving us a whole new set of fractals.
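As a concrete instance of a finite-step fractal built from self-similar pieces, here is a small Python sketch for the Koch snowflake (my choice of example; the post names no particular fractal). Each step replaces every segment with four segments of one-third the length, so step n has 3 * 4^n segments, each one a "bubble" of diminishing size:

```python
def koch_snowflake_stats(step, side=1.0):
    """Segment count and total perimeter of the Koch snowflake after a
    finite number of construction steps, starting from a triangle with
    the given side length."""
    segments = 3 * 4 ** step          # each step quadruples the count
    length = side / 3 ** step         # each step cuts lengths to a third
    return segments, segments * length

for n in range(4):
    print(n, koch_snowflake_stats(n))
```

At every finite step the figure is an ordinary closed polygon, so the molecule argument applies; it is only the infinite limit that raises the questions discussed next.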
In the case of a fractal, the non-trivial fractal "slopes," though non-differentiable, take on infinitesimal form in the infinite limit, but what form does a wild fractal curve take? The wild part has 0 slope. So whether the curve can be said to exist in the algorithmic limit must be determined from consideration of axioms. At this point, I am content to point out such a situation.
The Jordan curve theorem also applies to any curve of infinite length that is found at, above or below the x axis. We have the relation r(x), which may have 2n-1 distinct values. We then follow the arguments above and have that a point is found in a Dedekind boundary, between a Dedekind boundary point and r(a)_(2n-1), or not in the Dedekind boundary but above r(a)_max or below r(a)_min.