Saturday, May 22, 2010
Mathiness
"I was introduced to the expression by a mathematician who was an expert in the many hierarchies of mathematical logic, typically infinite sequences of types of sets definable by some class of formulas. Each step is defined by some critical bump in complexity or definitional power which can't be achieved at a lower level, and then one looks at what it takes to get beyond the whole sequence. One of the prof's PhD students, working in this area, punned on the Latin by titling his thesis Ad Astra Per Aspera.
Once you go high enough into one of these hierarchies, called the projective sets (of real numbers, or subsets of higher-dimensional R^n), there are all kinds of interactions with the highest infinities. Assume bigger infinities and you get more structure and organization "down below". There are a bunch of mathematicians for whom this is the holy grail: to figure out how far out to go into the infinite, based on these more "concrete" consequences.
Others think this is pure moonshine, a kind of mathematical ideology. The originator of this line of thought, though, was Kurt Gödel, who was a kook and a believer in the reality of the mathematical infinite (he also starved himself to death in Princeton after his wife died; he was paranoid about people poisoning him, I think). So the research programme has the sanctification of genius, and that goes a long way in math."
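For reference, the projective hierarchy the quote alludes to is the standard one: start from the analytic sets and alternate complementation with projection, each level adding definitional power the previous levels lack. This is a textbook definition, not part of the quote:

\begin{align*}
\mathbf{\Sigma}^1_1 &: \text{analytic sets, i.e. projections of closed subsets of } \mathbb{R}^{n+1},\\
\mathbf{\Pi}^1_k &: \text{complements of } \mathbf{\Sigma}^1_k \text{ sets},\\
\mathbf{\Sigma}^1_{k+1} &: \text{projections of } \mathbf{\Pi}^1_k \text{ sets},\\
\mathbf{\Delta}^1_k &= \mathbf{\Sigma}^1_k \cap \mathbf{\Pi}^1_k.
\end{align*}

The "more structure down below" is the fact that stronger large-cardinal assumptions imply regularity properties (such as Lebesgue measurability and determinacy) for sets higher and higher up this hierarchy.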
Thursday, May 20, 2010
Reinventing the Wheel
This has been exactly my sentiment for some time. Unless you build a system from the ground up, you don't understand it and cannot improve upon it:
Reinventing the Wheel
Wednesday, May 19, 2010
Computer Composers
Apparently the livelihood of musicians is at stake; fear the robots! They are also trying to take over scientists' jobs with machine learning.
Computer Composer
Tuesday, May 18, 2010
Bayesian vs. Frequentist analysis
Reasons besides the use of subjective priors why Bayesian and Frequentist approaches are different:
"There is a popular myth that states that Bayesian methods differ from orthodox (also known as “frequentist” or “sampling theory”) statistical methods only by the inclusion of subjective priors that are arbitrary and difficult to assign, and usually do not make much difference to the conclusions. It is true that at the first level of
inference, a Bayesian’s results will often differ little from the outcome of an orthodox attack. What is not widely appreciated is how Bayes performs
the second level of inference. It is here that Bayesian methods are totally different from orthodox methods. Indeed, when regression and density estimation are discussed in most statistics texts, the task of model comparison is virtually ignored; no general orthodox method exists for solving this problem.
Model comparison is a difficult task because it is not possible simply to choose the model that fits the data best: more complex models can always fit the data better, so the maximum likelihood model choice would lead us inevitably to implausible overparameterized models that generalize poorly. “Occam’s razor” is the principle that states that unnecessarily complex models should not be preferred to simpler ones. Bayesian methods automatically and quantitatively embody Occam’s razor (Gull 1988; Jeffreys 19391, without the introduction of ad hoc penalty terms. Complex models are automatically self-penalizing under Bayes’ rule."
MacKay-Bayesian Interpolation
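To see the self-penalizing evidence at work, here is a minimal sketch (mine, not MacKay's) that scores polynomial models of increasing degree by their log marginal likelihood. The synthetic data, noise level, and prior precision are all assumptions chosen for illustration:

```python
# Occam's razor via the Bayesian evidence (marginal likelihood).
# Linear-Gaussian model: y = Phi w + eps, w ~ N(0, alpha^-1 I),
# eps ~ N(0, sigma^2 I), so the weights integrate out exactly:
#   y ~ N(0, sigma^2 I + alpha^-1 Phi Phi^T).
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Synthetic data from a quadratic with Gaussian noise.
x = np.linspace(-1.0, 1.0, 25)
y = 1.0 - 2.0 * x + 3.0 * x**2 + rng.normal(0.0, 0.2, x.size)

sigma = 0.2   # assumed known noise standard deviation
alpha = 1.0   # assumed prior precision on the weights

def log_evidence(degree):
    """Log marginal likelihood of a degree-`degree` polynomial model."""
    Phi = np.vander(x, degree + 1, increasing=True)  # design matrix
    cov = sigma**2 * np.eye(x.size) + (Phi @ Phi.T) / alpha
    return multivariate_normal.logpdf(y, mean=np.zeros(x.size), cov=cov)

for d in range(7):
    print(f"degree {d}: log evidence = {log_evidence(d):8.2f}")
# The evidence should peak near the true degree (2 here) and fall
# for higher degrees, even though their maximum-likelihood fits keep
# improving: the complex models spread prior mass over many possible
# datasets and are self-penalized, with no ad hoc complexity term.
```

Maximum likelihood alone would keep improving with degree; the evidence instead trades goodness of fit against the volume of datasets each model can explain, which is exactly the Occam factor MacKay formalizes.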
"There is a popular myth that states that Bayesian methods differ from orthodox (also known as “frequentist” or “sampling theory”) statistical methods only by the inclusion of subjective priors that are arbitrary and difficult to assign, and usually do not make much difference to the conclusions. It is true that at the first level of
inference, a Bayesian’s results will often differ little from the outcome of an orthodox attack. What is not widely appreciated is how Bayes performs
the second level of inference. It is here that Bayesian methods are totally different from orthodox methods. Indeed, when regression and density estimation are discussed in most statistics texts, the task of model comparison is virtually ignored; no general orthodox method exists for solving this problem.
Model comparison is a difficult task because it is not possible simply to choose the model that fits the data best: more complex models can always fit the data better, so the maximum likelihood model choice would lead us inevitably to implausible overparameterized models that generalize poorly. “Occam’s razor” is the principle that states that unnecessarily complex models should not be preferred to simpler ones. Bayesian methods automatically and quantitatively embody Occam’s razor (Gull 1988; Jeffreys 19391, without the introduction of ad hoc penalty terms. Complex models are automatically self-penalizing under Bayes’ rule."
MacKay-Bayesian Interpolation
Thursday, May 6, 2010
Gelman comments on pointless and unethical decision research
Even if we knew this theory was true with certainty, how would it help us at all?
Gelman comments on pointless and unethical decision research