*From: Dean Brooks
Location: Vancouver, Canada
Date: 07/06/2009*

Hi Greg,

I'm a longtime fan, particularly of Eon and Anvil of Stars. I've got a book draft I'd like to talk you into reading. It's a very ambitious project in applied probability, very relevant to many of your interests.

The basic thesis of the book is that ever since Newton and Pascal laid down the classic theorems of probability theory, we've been on a subtly but critically wrong path. If I had to reduce my theory to a single topic sentence, it would be: rare events become rarer in logarithmic proportion to increasing set size, because entropy increases.

Over the course of my research, I've had close to a decade of experience in suggesting to people that classical probability is wrong, and talking about my work. I can class the reactions I get into four groups:

1. "No way. The binomial theorem, the central limit theorem, the principle of independent events, cannot be wrong. It is a metaphysical impossibility. They are central to scientific reasoning. Half of everything we know would come crashing down if they were wrong. So you must be a tinfoil-hat-wearing crackpot."

2. "I can see the possibility of it, sort of like the way that Newtonian physics was replaced by special relativity. We obviously aren't grossly wrong, but we could be subtly wrong, and even a subtle difference would be conceptually very important. It might not change how I do sampling, or give me a surefire way to beat the house in Vegas, but it would be very enlightening on a broader, conceptual level to have a new theory of probability."

3. "I can't speak for people in other fields, but in my field, we've had to give up using classical probability almost entirely. For every serious problem we try to attack, we find ourselves dealing with highly skewed data sets, non-stationary distributions, "fat tails," and other biases that classical probability simply can't cope with. I hesitate to claim that classical probability is completely broken, because it seems like every other major field still swears by it. Even in my field, we still tend to cite measures of significance and include pro forma references to classical statistics, as if they really mean something. But among my colleagues, in private, there's a growing conviction that we need something better. I'd take a close look at what you've got."

4. "Right on, man! I've known ever since I first encountered quantum mechanics that there was something screwy about classical probability. They couldn't both be right, not at the same time. But good luck getting the scientific establishment to listen to you ..."

These are basically the four canonical positions. The response I get when I talk about my research sorts out into a few 1's and a few 4's, more 2's, and a shocking number of 3's. It turns out that in field after field, there are major anomalies going back decades that seem to contradict classical probability -- clear evidence of spooky interaction at a distance, for lack of a better term, among the elements of a large and widely distributed system. I have collected several hundred such anomalies into a 650-page book, and laid out a method for explaining and dealing with them.

For the classical games of chance (dice, cards, random number generation algorithms) my method predicts *almost* the same answer, but not quite, and this is confirmed by experiment. The bias is like the difference between a circular orbit and a slightly elliptical one. One of the very neat side benefits of the new method is that I can now explain a century of failed ESP experiments as being due to systematic and incurable bias in the random-number-generating apparatus and test method. Many people suspected this already, but the form of the bias is highly intriguing in its own right.

For real-world problems, particularly the behavior of large social groups, the differences are often very large, and shocking, and powerful. There is a connectedness at work in large sets of random events. Set size matters, in the same novel fashion, in dozens of different fields from criminology and macroeconomics to military history and pop culture.

My work is based on a seminal paper written by Edwin Jaynes back in 1957, on the principle of maximum entropy. There are a number of other people doing work on maximum entropy, but their approaches are all confined to particular subject areas. They're very conservative, only producing papers about macroeconomics or hurricane prediction or some other specialized topic. My aim is to show that probability theory is wrong across the board and that it has to be replaced wholesale with Jaynes' Bayesian maximum entropy rules.

Of particular interest to you, I think, is my very new and radical take on how epidemics spread, as well as what causes punctuated equilibrium. It is central to the sort of large-scale system changes that most of your books are about -- I can give you a totally fresh take on the problems you tackled in Blood Music, in Darwin's Radio, and so on.

This includes rare events like mutation, infection, or mortality from infection. Set size matters. In fact, set size appears to be the only thing that matters, which is very, very spooky. If the system is complex enough, the decline tends to take one of a handful of specific forms -- usually a power law curve based on total numbers in the system.

Thus for a whole range of infectious diseases, mortality falls in power-law proportion as the epidemic grows: increase epidemic size by a factor of 10, and mortality falls by approximately half. This is true of Ebola, AIDS, avian flu, H1N1, plague, smallpox, you name it. I have official WHO data to back this up, but you don't need me to supply it; it's all online. The increase in latent period for AIDS, which reciprocally decreases mortality, is closely correlated in power-law fashion with the size of the epidemic. So is the transmission rate.
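The relationship being claimed has a simple arithmetic form. Here is a minimal sketch of that claim as stated (the function name and baseline are mine, purely for illustration; this demonstrates the math of the claimed power law, not the WHO data):

```python
import math

# The claim as stated: mortality halves for every tenfold increase in
# epidemic size, i.e. mortality ~ size**(-b) with 10**(-b) == 1/2.
b = math.log10(2)  # exponent, about 0.301

def relative_mortality(size_factor):
    """Mortality relative to baseline when the epidemic grows by size_factor."""
    return size_factor ** (-b)

print(round(b, 3))                        # 0.301
print(round(relative_mortality(10), 3))   # 0.5: tenfold epidemic, half mortality
print(round(relative_mortality(100), 3))  # 0.25: hundredfold, one quarter
```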

You might think this is the sort of thing people would already have noticed. They have. Most of these anomalies have been known for decades. Back in the 19th century, William Farr produced a lot of graphs with relationships of this kind. But when he died, no one took up the idea again. Analysis focused on specific risk groups, not risk in relation to a changing total epidemic size. From what I can tell, Farr's kind of analysis hasn't been done in over a century.

I don't have any academic or governmental affiliation. The research is strictly a private venture at this point. Although I do plan to submit to peer-reviewed journals at some point, my hopes for the book's success rest mainly on word of mouth among scientific opinion leaders. I'm submitting this as a public post rather than a private one in the hopes I can intrigue onlookers who will then help persuade you.

Many thanks for your time and attention.

Dean Brooks

President, Ekaros Analytical Inc.

**From: Greg Bear**

Date: 07/14/2009

Hello, Dean! Thanks for the beginning of a fascinating discussion. I've long wondered about the long-term prospects of our current take on probability theory. I'll post this and see what the response is from others. Your take sounds promising--and the key word, Bayesian, is certainly current these days. (Vernor Vinge knows more about it than I do--he first introduced me to it!)

Comments? Let the debate begin!

*From: Jim Hess
Location: Irvine, CA
Date: 07/15/2009*

Intriguing: Big claims but thin evidence.

His website has a link for his Statistical Publications which may provide enough information to begin an evaluation:

http://www.ekaros.ca/

*From: Scott Maasen
Location: Springfield, MO
Date: 07/15/2009*

Okay, I'll bite. Can you be more specific about what causes the shift in probabilities? I could be convinced there is a flaw in standard theory, but there would have to be some reason for it. Standard probability rules make logical sense to me, at least when applied to simple systems. I could believe the reality is different, but not without there being a reason for it.

Flip a coin nine times and each time it comes up heads. On the tenth flip most people would have a strong belief that the odds of it being heads again were below 50 percent. I would still go with the 50 percent odds if I was betting, even though my gut instinct would try to tell me otherwise.

*From: Bill Goodwin
Location: Los Angeles, CA
Date: 07/16/2009*

I'm a layman and don't even belong in this discussion--if my comments make sense at all they are very likely old and tedious.

But the subject of probability interests me because of the way it seems (to me) to simultaneously avoid, and touch upon, the role of consciousness in unfolding a personal universe from the one we "know." Am I right in thinking entropy has no place in the "block time" in which classical physics takes place? One doesn't hear it put that way, but you see what I mean. A sugar cube (for example) is more "orderly" than a dissolved solution really only in its appeal to the rational mind--the "pointiness" of Time's Arrow still begs a subjective experience that is not much addressed. The idea of information implies someone who is informed, or at least I've always thought so. Probability theory seems a sort of lip service to the presence of a time-bound observer (who cannot foresee outcomes), while postponing the problem of whether we live in a deterministic, or stochastic, universe by treating an epistemological conundrum as a merely mathematical one. Mr. Brooks's ideas, to my untrained ear, hint at an erosion, or outright assault, on this taboo... I'm wondering if that is part of his thinking or only my reaction to it.

My grandfather, Francis Wadley, was an entomologist and statistician who designed experiments for the Department of Agriculture. His field was probit (probability unit) analysis; "Wadley's Problem," regarding dose-response experiments where the distribution is poisson rather than binomial, was named for him. As a young man he delivered milk by horse-drawn wagon; he died shortly after Apollo 11. Maybe the intrigue I feel here is genetic. I remember him teaching me the parts of the insect body...I wish I had his mind and training. In any case, a fascinating thread.

*From: patrick
Location:
Date: 07/16/2009*

Sounds hip. I lack any 'professional' scientific/mathematical expertise, but I've long felt there to be a lack of large-scale grasp of things, very simply because of the still-animal brains that even many in scientific fields appear to have. To be human is already to be limited. Hah. But things progress.

*From: Dean Brooks
Location: Vancouver, Canada
Date: 07/16/2009*

Hi again Greg,

Many thanks for agreeing to have a listen. Let me start where the book starts, with my favorite anomaly. It has to do with the structure of history.

There are about 600 historical hereditary dynasties, going back to 3,500 BCE, for which we have (a) reasonable confidence about dating, and (b) at least eight rulers in succession.

Now suppose we were to do an experiment, to see if there was any kind of trend in reign lengths. We would take each dynasty, with N rulers, and compute average reign length for the first N/2 rulers compared with the last N/2. Pretty simple, and if we exclude really early dynasties, our confidence in the data would be very, very high. If we know anything about history, we know the dates when kings took power. And of course reign stability is a matter of the utmost seriousness for any government, so this is not just an idle exercise in numerology.
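The proposed test is easy to sketch in code. This is a minimal version of the procedure described above; the function name and the eight-ruler reign series are mine, made up purely for illustration (the real input would be dated successions from the historical record):

```python
def half_trend(reigns):
    """Compare the mean reign length of a dynasty's first half to its second.

    reigns: reign lengths in years, in order of succession (for an odd
    count, the middle ruler is dropped). Returns the second-half mean
    divided by the first-half mean; below 1.0 means reigns shortened.
    """
    n = len(reigns)
    first, second = reigns[:n // 2], reigns[n - n // 2:]
    return (sum(second) / len(second)) / (sum(first) / len(first))

# Hypothetical dynasty of eight rulers with declining reign lengths:
example = [40, 35, 30, 28, 22, 18, 15, 12]
print(round(half_trend(example), 2))  # 0.5: reigns shortened in the second half
```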

So what would you expect?

1) No trend -- as many dynasties show shorter reigns in the second half as longer.

2) Upward trend in stability -- most dynasties show longer reigns in the second half

3) Downward trend in stability -- most dynasties show shorter reigns in the second half

I am sure you realize that the expected answer, under classical statistics, is (1). But the actual answer is weirder than most people can imagine. Possibly even you--which, considering what you've written over the years, is saying something.

They decline in the second half, very consistently. But here's where it starts to get weird. It's a scale-invariant function, a power-law curve. If there are just 10 rulers in the series, then by ruler #10 the cumulative average has fallen by half. If there are 100 rulers in the series, then by #100 the average has fallen by half again. Thus scale invariance: If I was counting rulers in groups of 10, then by the 10th group of 10 we'd be at half of the value of the first group of 10. The first ruler will last 40 years, the 10th will last fewer than 20, the 100th will last somewhat fewer than 10, and so on down. It works for several hundred popes, several hundred Japanese emperors, all the Roman emperors, and many, many shorter dynasties. It works across nearly three orders of magnitude, which is pretty good for a rule of this kind.
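The curve being described is a power law whose exponent is fixed by "half per factor of ten." Here is a small sketch of that rule, using the 40-year starting figure from above; the function name is mine, and this is the idealized form of the claim (individual dynasties scatter around it):

```python
import math

def claimed_average_reign(n, first=40.0):
    """Idealized scale-invariant curve: the cumulative average reign
    length at ruler n halves for every tenfold increase in n."""
    return first * n ** (-math.log10(2))

for n in (1, 10, 100, 1000):
    print(n, round(claimed_average_reign(n), 1))
# 1 40.0 / 10 20.0 / 100 10.0 / 1000 5.0
```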

Individual dynasties vary, obviously, so in a given case the drop at #10 might not be 50 percent, it might only be 30 percent, or it might be 80. But the tendency is very clearly this scale invariant curve.

Now, moving up the scale a little, there are lots of places -- Egypt, China, Imperial Rome -- where there were numerous successive dynasties. As many as 30 dynasties in Egypt, for example. So we can do the same trick with them, a sort of "set of sets" analysis. And when we do, we get the same scale-invariant curve, same slope. By the 10th dynasty we're at half the longevity of the first dynasty.

This is not a property predictable from the behavior of the individual rulers. In fact, it's rather counter-intuitive. We could justify individual dynasties declining, the later rulers being weaker, because metaphorically that's the "life cycle" of a dynasty. But then how does the set-of-sets constitute a life? Why is it weaker later on as well? It's a whole series of lives, using the metaphor, and the consistency of the curve implies that these later "lives" are impaired from their start. It's not just that Ruler X of Dynasty Y is in a bad position relative to Ruler 1 of Dynasty Y. It's also that Ruler 1 of Dynasty Y is in a bad position relative to Ruler 1 of Dynasty 1. The whole "genus" of dynasties is weakening, with each individual "species" weakening as well, consistently producing shorter-lived individuals later in the series.

It's a cycle, a kind of sawtooth leading downward, with successively weaker "new starts" for the first ruler of each dynasty. This is a mathematically verifiable cycle of history, a very well-defined one. It's hardly even worth the bother of applying goodness-of-fit tests, you can see just by looking at the graphs that it's a real pattern. A significance test would go off the scale at the high end, odds of trillions to one against chance.

Okay, weird enough for you? Not done yet. I plotted the entire set of dynasties in chronological order, all the dynasties known to history going back to 3,500 BCE. This is "set of sets of sets" analysis. Same scale-invariant curve AGAIN. It's fractal, it just keeps working no matter what scale you try. The lengths of hereditary dynasties started out at a phenomenally stable average of around 600 years, way back in the mists of time. They fell along this steady curve for roughly 4,900 years, until by the 19th century a hereditary dynasty could expect to last a mere 52 years. That is, a "hereditary" dynasty no longer lasted as long as a human lifetime. At that point, the institution of hereditary rule collapsed altogether.

What this means is that all of recorded history is -- in some very tangible, testable, literal, mathematically objective sense -- a unity. It is *one thing*, one process, one unfolding rule for a complex set of sets. The entire process could have been predicted by a competent mathematician starting about 2,000 years ago. Because the curve is scale-invariant, he would only have needed to see the first 50 to 100 dynasties or so, and the rest would logically follow.

Forget Hegel and Marx. Here we have a genuine historical dialectic, an inexorable working-out of a tremendous underlying principle that somehow reaches out across all of space and time. Think of all the social norms, all the institutions, that revolve around regime stability. Think what it means to a society when control changes hands. Think how many factors regime stability is supposed to depend on -- wars, riots, aristocratic plotting, famines, religious conflicts, technology changes, language, culture. It's endless. And yet it's all predictable? It's a mathematical law?

Yes. In fact, it is a maximum entropy curve, or more precisely a family of them, consistent with Jaynes' Bayesian approach. And this very same curve shows up in hundreds of other places.

I'll pause here to let you and any onlookers ponder.

Cheers, Dean

*From: Mike Glosson
Location: San Diego, CA
Date: 07/18/2009*

Well...hmmmm...some things of interest, but you almost derailed the whole thing as a rationalization for New Agey ESP, with the paragraph containing the line "spooky action at a distance" and the following paragraph, a near apology for failed ESP experiments.

As someone who has led a life where at times improbable outcomes and minimum probabilities tend to dominate, then flip-flop with maximum probabilities, new models are always welcome...but it's more of a pre-conscious processing event these days, leading to a very Taoist-like approach to the whole thing, with probability being ruled by "Whim," "Kicks," "Slack," and "Stupid Human Tricks."

650 Pages of Anomalies? Welcome to my world.

**From: Greg Bear**

Date: 07/28/2009

Flipping coins gives just two outcomes--plus the rare edge shot, of course. What about complex systems where the outcomes are nearly infinitely varied? That's where probability either delivers gas-law certainties or might be open to more sophistication.

**From: Greg Bear**

Date: 07/28/2009

Only humans do statistics, of course. (Computers do statistics for humans.) I think all of math, as a condensed subset of natural languages, is ultimately concerned with the needs of biological beings... namely, us.

**From: Greg Bear**

Date: 07/28/2009

Limited compared to what? Human 2.0...

**From: Greg Bear**

Date: 07/28/2009

Very cool. It seems probable to me that what we're seeing here is human culture as a whole adjusting its own learning curve to maximize individual benefit, rather than just letting rulers rule indefinitely. More of a stretch: the longer civilization endures, the more probable, even likely, democracies become?

*From: Dean Brooks
Location: Vancouver, Canada
Date: 07/28/2009*

Wow, a lot of responses. I'm very pleased.

Scott wrote: "I could be convinced there is a flaw in standard theory, but there would have to be some reason for it. Standard probability rules make logical sense to me, at least when applied to simple systems. I could believe the reality is different, but not without there being a reason for it.

Flip a coin nine times and each time it comes up heads. On the tenth flip most people would have a strong belief that the odds of it being heads again were below 50 percent. I would still go with the 50 percent odds if I was betting, even though my gut instinct would try to tell me otherwise."

Greg said something similar. This is indeed the critical point. If you can "see" where classical theory fails for binary outcomes, the rest becomes much easier.

Edwin Jaynes posed the problem this way: when you look at a system, what are you measuring? As the system grows more complex, your frame of reference changes. For example, if I look at single gas molecules, I am measuring two things -- speed and direction. If I look at them in aggregate, I am measuring pressure and temperature -- a very different kind of measure.

This turns out to be relevant for all kinds of probabilistic systems. Our way of thinking about probability obscures what ought to be a general rule.

If I am flipping one coin in isolation, then much as I would for a single gas molecule, I enumerate all the possible states. The gas molecule can move in three dimensions, with a range of values. That is the set of possible states. The coin has two possible states (ignoring the edge for now).

If I am flipping many coins, then I am dealing with a different system, and a different measure. The coins could all come up heads at once, or tails. There are millions or trillions of permutations even for a relatively small system.

The classical argument is that each flip is effectively isolated from the others. There is no rule connecting one to the next in a series, or that coin flipping over there to this one here. But there is a hidden assumption here, an assertion that we carry over from the single-coin case that is not actually true.

In the single-coin case we assert that the odds are about 50-50 *over a long series of throws*. We are in effect making a prediction about the system *at equilibrium*. Right?

In the multi-coin case, say 20 coins, there are more than one million permutations. If I flip the 20 coins 100 times, how much of the total range of possible outcomes will I have explored at that point? About 1/10,000th. The multi-coin system is NOT at equilibrium. It is nowhere near equilibrium.
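The arithmetic behind "nowhere near equilibrium" is easy to check:

```python
# A system of 20 two-state coins has 2**20 joint configurations.
states = 2 ** 20
print(states)  # 1048576: the "more than one million permutations"

# Flipping all 20 coins 100 times visits at most 100 of those
# configurations, so the fraction of the state space explored is tiny:
fraction = 100 / states
print(round(1 / fraction))  # 10486: roughly 1/10,000th of the state space
```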

We have all kinds of examples from nature that serve as a caution in this regard. Thermodynamics is riddled with very hard problems that we still cannot solve, because there is no equilibrium state to simplify the calculation.

The obvious rejoinder is that 100 flips is enough for any one coin to come to equilibrium. Yes. But now we come to the critical point, the insight where the light bulb goes on. I hope.

You could test the balance of heads to tails for each individual coin and find nothing interesting. They ARE at equilibrium, by that particular measure of the system. The ratio of heads to tails results is conserved.

Yet at the same time, if you test the distribution of higher-order patterns, you will find a very striking non-classical trend. The 20 coins viewed as a system exhibit a separate, independent kind of behavior that is not predictable based on extrapolations from the behavior of one coin.

This is no different, no more paradoxical, than the assertion that, if you follow individual gas molecules around, their behavior maintains conservation of momentum -- yet if you view the behavior of a volume of gas that is not at equilibrium, there are pressure waves and dispersal effects going on at every scale.

What is particularly interesting is that binary random number generators exhibit these types of large-scale or long-run anomalies, and have done so since the first "chance machines" were developed. They have been observed, and filed away, by a long succession of investigators. They were present in the first million-digit random number project undertaken by the RAND Corporation. They are present in modern Keno games, in the MS Excel spreadsheet logic, and in many other places.

The reason they are not a matter of intense controversy already is because the industry standard for random number generation (that is, the test of whether the sequence is random) is based on small slices of data. The DieHard test, which was the standard for many years, and the new NIST test, are both based on discrete chunks of about 1,000 bits of data at a time. Long-run trends are simply invisible to this kind of test. (Say I was looking for a surplus of cases where 1 appeared 15 times, and 0 only 5 times, in 20 bits. This is already so unlikely that it will only happen a few times in 1,000 bits. To detect any sort of trend would require much larger sample sizes. So it passes totally unnoticed by industry-standard monitoring.)
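The rarity figures in the parenthetical can be checked directly. A quick sketch, assuming fair bits and counting over disjoint 20-bit windows (the variable names are mine):

```python
from math import comb

# Probability that a fair 20-bit block contains exactly fifteen 1s and five 0s:
p = comb(20, 15) / 2 ** 20
print(round(p, 4))  # 0.0148: about 1.5 percent of 20-bit blocks

# Expected number of such blocks among the 50 disjoint 20-bit windows
# that fit in 1,000 bits:
print(round(50 * p, 2))  # 0.74: well under one per 1,000-bit test chunk

# Bits needed before we expect, say, 100 such blocks -- the scale
# required to see any trend in their frequency:
print(round(100 / p * 20))  # about 135,000 bits
```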

The insight of Jaynes is that a system can exhibit weird collective behavior that is not merely a simple linear sum or permutation of the behavior of individual components. Specifically, it exhibits increasing entropy, with rare configurations starting out more common than expected, and becoming rarer as the set of observations grows.

I hope that makes sense. More in my next post.

*From: Dean Brooks
Location: Vancouver, Canada
Date: 07/29/2009*

Mike wrote: "Well...hmmmm...some things of interest, but you almost derailed the whole thing as a rationalization for New Agey ESP, with the paragraph containing the line 'spooky action at a distance' and the following paragraph, a near apology for failed ESP experiments."

Yes. I was aware I was taking a risk by saying it that way. Here is a fuller exposition of those points, which should satisfy readers that I am not New Agey. I am Old Jaynesy.

My reference to 'spooky action at a distance' should be understood as referring to the eerie coordination of dynastic lengths. In that context, I contend that it is mathematically very similar to the sort of spookiness observed in quantum mechanics. Specifically, we can predict the behavior of dynasty N simply on the basis that it is dynasty N. Our prediction is robust, and the difference between dynasty N and a later dynasty 2N or 10N is large.

The difference in behavior between the two dynasties is actually larger than, say, the difference observed between the first and second photons in the classic QM polarization filter experiment. It follows from a similarly simple rule of "entanglement," which explicitly states that the fates of the later dynasties are conditional on the earlier ones.

I am being provocative here, using language that will turn heads, because there is no other way to approach the problem than to turn assumptions upside down. In QM, the second photon is believed to behave differently than the first BECAUSE it comes second, and not for any other reason that we can observe. This then leads to all sorts of weird mystical formulations regarding whether the photon really exists in a definite state before being observed, and so on. Jaynes argued that this same kind of very economical rule could be found elsewhere, and wherever it might be found, it should be classed as a rule of inference, not as a model of actual behavior.

In other words, unlike the photons, we KNOW that the dynasties are real and exist at all times. So if it is possible to make far-reaching inferences about future events on the basis of very little information, it is not because dynasties exist as probability clouds. This should then cast doubt on whether the probability-cloud analogy has any value in QM.

In effect, Jaynes argued for a return to the hidden variables theory of QM, not because he had any special insight into QM as such, but because when you use maximum entropy curves to forecast behavior, there are ALWAYS myriad hidden variables. The QM paradoxes turn out to be not at all special; the way we reason about everyday problems, even coin flips, becomes very much like the way we reason about QM. There is a consolidation of the logic, so that instead of two separate realms, QM and classical, there is one realm consistently governed by one method.

Now as for ESP. There was an Oxford mathematician, George Spencer-Brown, who later became famous for his work on the Laws of Form. In 1957, he published a little book entitled Probability and Scientific Inference. It was about the inherent limitations of "chance machines" as they were known back then. Spencer-Brown observed that ESP experiments invariably produced a short burst of rare matchups, in which the subject guessed the right number far above chance, and then a long decline, which if prolonged sufficiently would take the guesses well below chance. (This is what ESP enthusiasts call "psi-missing".)

Spencer-Brown asked a very simple question: What happens if we substitute another chance machine for the human subject? It turns out that the same thing happens, that is, a brief interval of unlikely coordination followed by a long decline. Spencer-Brown gave a number of examples from the literature, ran some new experiments of his own, and concluded that there was something wrong with probability theory as such. He regarded early ESP investigators as having found nothing whatever in terms of sensory perception -- no ESP -- but having built up a great deal of evidence for a problem with probability theory.
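Spencer-Brown's machine-against-machine protocol is simple to reproduce in outline. This sketch (function name and parameters mine) only sets up the experiment with a modern pseudo-random generator and reports the raw agreement rate; it takes no position on whether the claimed early-burst-then-decline pattern appears:

```python
import random

def match_rate(trials, symbols=5, seed=None):
    """Pit one 'chance machine' against another: two independent streams
    of guesses over `symbols` equally likely outcomes, counting agreements."""
    rng = random.Random(seed)
    hits = sum(rng.randrange(symbols) == rng.randrange(symbols)
               for _ in range(trials))
    return hits / trials

# With 5 symbols (as in the Zener cards of classic ESP tests), chance
# agreement is 1/5, so a long run should sit very close to 0.2:
print(round(match_rate(100_000, seed=1), 2))
```

Spencer-Brown's point was that the interesting question is not the overall rate, which does converge to chance, but how the local rate evolves over a long session.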

His arguments produced a brief furor, with letters in Nature and several other journals. (In the 1950s, you could still get ESP papers into Nature.) Unfortunately, Spencer-Brown could not offer a replacement theory that would predict the long, slow decline he observed. The debate died out. Although Spencer-Brown had a long career and was justly famous for Laws of Form, his attack on probability has basically sat on a shelf for the past 40-50 years.

What I have done is to repeat Spencer-Brown's experiments, getting the same result but on a much larger scale -- and then to use Edwin Jaynes and Bayesian logic to explain them.

So no, I am not an ESP nut. I don't think consciousness shapes reality, or can see around corners. I think random number generators produce a distinct kind of high-level ordered behavior that evolves with the number of trials generated, because it is in the nature of all complex systems to do that. I think putting a human in the loop to replace a random number generator produces the same result. So the human can, for a short time, produce unexpectedly high correlation between his guesses and the results produced by the machine he is playing against. If he persists against a given machine, in a given session, he will lapse into "psi-missing" much as described in the literature. In effect, we only think this is interesting because we don't understand something basic about probability. Once we approach probability from a Bayesian, maximum entropy standpoint, the whole subject of ESP ceases to have separate existence. It becomes a trivial footnote to the basic rules of probability.

One last point -- my website is several years out of date and not really intended to explain all this. There are some good resources there regarding size laws and distribution theory, but most of what I am arguing just isn't on the Net anywhere.

I'll pause now and see what these two posts bring in the way of responses. Many thanks to all, I'm enjoying this.

*From: Andrew Carpenter
Location: Cauterets, France
Date: 08/01/2009*

Dear Greg,

I wonder if all that we hear of the benefits of democracy works in all cases. Sure, there is democracy in England, one of the homes of that school of thought, yet we have seen police abuse of citizens on a scale that has never been seen before. These folk who cover their faces and deliberately hide their credentials as officers of the law openly murdered a member of the public.

I'm sure that the policeman involved didn't expect to kill someone that day, but I doubt that he lost much sleep over it until he was caught on camera.

Democracy? The world is in a horrible, media-infused nosedive, and as far as I can see there is no exit from this mess. In the UK alone, my children's grandchildren will still be paying for our national debt.

Christ, I would never have believed that this kind of strife would be here in this day and age. I was nine years old when Neil Armstrong first touched his toe on alien soil. Do you know, it was a magic moment. Greg, we need another magic moment.

Bon Dormir from France

Andrew

*From: Steven Miller
Location: Williams College
Date: 12/05/2009*

To: Dean Brooks

Re: Benford's law

Greetings. I'm a professor in the math/stats department at Williams College. I'm currently teaching probability, and someone was kind enough to pass along your article on Naked-Eye Quantum Mechanics. I've done a lot of work in Benford's law (my homepage is http://www.williams.edu/go/math/sjmiller/public_html/index.htm); if you could drop me an email at sjm1 AT williams.edu, I'd love to chat with you. //s

**From: Greg Bear**

Date: 12/17/2009

Dean, meet Steven Miller!

*From: Dean Brooks
Location: Vancouver, Canada
Date: 12/17/2009*

Many thanks Greg, I am writing to Steven.

The discussion kind of petered out, sadly, but in the meantime I have shown my book to O'Reilly (the software manual people) and they are very interested. My offer to have you read the book still stands, if you have time.

**From: Greg Bear**

Date: 12/17/2009

I'd enjoy seeing a copy when you get it finalized. I don't claim expertise here, but my instincts tell me you have some interesting ideas worth getting acquainted with. I've met Mr. O'Reilly and his people... they could do good work with your book.