I tend to be very skeptical of news articles whose headlines read: “Scientists have found that…”. Too often the scientists or researchers involved aren’t even named, and there is no way of digging further to see whether it really was professional scientific work that led to the conclusion.
This is especially annoying because the research that makes headlines is usually negative and/or contradictory: something may cause cancer, something contradicts common beliefs, something radically new has been discovered or invented. And these are exactly the research findings that are most likely to be wrong. The criticism that may follow such discoveries hardly ever makes the news, so the public is left with the headline as THE TRUTH and never learns that the research was fundamentally flawed, or that the “scientist” was actually a fabrication by a reporter at “The News Of The World”.
I just finished a period of exams. Don’t get me wrong: I think I did OK on most of them, so this entry is not an attempt to blame anything on the format. But it got me thinking about the nature of exams, what they are for and how they are performed.
To the best of my knowledge, exams today are conducted more or less the same way they were 100 years ago. The details may vary, but it usually goes something like this: the student has a certain curriculum he or she must study, understand and memorize. The exam is then supposed to be a statistical sample of this curriculum, and the result to reflect approximately how much of the curriculum the student has been able to take in.
I can see how this was a very good format a century ago, but times have changed while exams haven’t. The reality most of us deal with today involves far more information than people had to handle back then, and far more tools for managing that information than was imaginable at the time. I think today’s exams should be less about memorizing lists that could be Googled in seconds and more about how well people are able to use those tools to work on their projects.
A long time ago I heard about a funny paradox concerning the lowest integer that is not special in any way. “Special numbers” were defined by certain rules: even numbers were special, so were prime numbers, any multiple of 5, any power of 2 and any number with two digits alike. There may have been a few more rules, but they all made sense in that the numbers they defined somehow “felt” special.
The lowest number that is not special is, of course, special for the very reason that it is the lowest number that is not special. So we would have to look for another number, the next lowest that is not special, and so on ad infinitum.
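For fun, the rules as I remember them can be checked by machine. Here is a minimal Python sketch, assuming exactly the rules listed above (even, prime, multiple of 5, power of 2, repeated digit) and nothing more:

```python
def is_prime(n):
    """Simple trial-division primality test."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_special(n):
    """A number is 'special' under the paradox's rules if it is even,
    prime, a multiple of 5, a power of 2, or has two digits alike."""
    digits = str(n)
    return (
        n % 2 == 0
        or is_prime(n)
        or n % 5 == 0
        or (n & (n - 1)) == 0          # power of two (for n > 0)
        or len(set(digits)) < len(digits)
    )

# Find the lowest positive integer that is not special.
lowest = next(n for n in range(1, 1000) if not is_special(n))
print(lowest)  # 9 under these particular rules
```

Under these rules the first non-special number turns out to be 9 (odd, composite, not a multiple of 5, not a power of 2, no repeated digit), which of course makes it special after all.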
I don’t remember the source of this paradox, but I’m going to suggest a similar one. What is the lowest integer that cannot be found with Google? When you find one, you must post it on the web (e.g. in a comment to this post). It will then be indexed by Google and will no longer be the lowest number that cannot be found on Google, and so the hunt continues.
The Famous Brett Watson has written a detailed and intelligent response to my entry “Breeding Shakespeare, Not Typing”. In that entry I argued that while a thousand monkeys typing randomly might never reproduce so much as a single quote from the works of Shakespeare, a thousand monkeys with a minimal understanding of the theory of evolution could actually reproduce the entire works of Shakespeare in a relatively short time using very simple methods.
With Watson’s permission I have posted his email response below. He makes many good observations. I still have a few objections and intend to respond soon; in the meantime, enjoy Brett’s reasoning.
“A thousand monkeys, typing on a thousand typewriters will eventually type the entire works of William Shakespeare.”
This quote is often attributed to Thomas Huxley, one of Darwin’s most faithful followers in the debate that followed the publication of “Origin of Species” in 1859. Other versions of the quote have a million monkeys, or an infinite number of monkeys, in each case typing on equally many typewriters. Huxley is said to have used the image to argue that chance alone would eventually produce the diversity of life on Earth. The story about Huxley is not true, but regardless of who came up with the notion, it certainly is thought-provoking.
Calculations show that having the monkeys type even the single line “To be or not to be, that is the question.” by pure chance would be incredibly unlikely. I therefore decided to try to breed Hamlet rather than rely on chance alone, and the results were quite interesting.
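The breeding idea can be sketched as a simple evolutionary loop: start from a random string, make many mutated copies each generation, and keep the copy that matches the target best. This is a minimal illustration of the general approach, not my exact setup; the function names, mutation rate and population size are all assumptions:

```python
import random
import string

TARGET = "To be or not to be, that is the question."
CHARS = string.ascii_letters + " .,?"

def fitness(candidate):
    """Count the characters that match the target in the right position."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.02):
    """Copy the parent, replacing each character with small probability."""
    return "".join(
        random.choice(CHARS) if random.random() < rate else c
        for c in parent
    )

def breed_line(population=100, max_generations=5000, seed=0):
    """Evolve a random string toward TARGET by mutation and selection."""
    random.seed(seed)
    parent = "".join(random.choice(CHARS) for _ in TARGET)
    for generation in range(1, max_generations + 1):
        # Each generation: many mutated copies, keep the fittest one.
        offspring = [mutate(parent) for _ in range(population)]
        parent = max(offspring, key=fitness)
        if parent == TARGET:
            return parent, generation
    return parent, max_generations

result, generations = breed_line()
print(generations)
```

Pure chance would need on the order of 56^42 attempts to hit this 42-character line, while selection on partial matches typically finds it in a few hundred generations, which is the whole point of the breeding approach.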
In last week’s Wetware post, “A New Way to Fight Blog Comment Spam”, I proposed methods that would prevent robots from posting comments. Kalsey commented that there are clear indications that many spam comments are actually posted manually rather than by robots, which obviously renders my proposed functionality useless in those cases.
The day after Kalsey’s comment, I was reading a paper by Richard Gatarski called “Artificial Consumers: A Role for Computers as Subjects in Consumer-Related Marketing“. In the paper Richard makes convincing arguments that computers are in fact consumers in our world, as they – among other consumer characteristics – consume bandwidth, processing power and information and interact with humans and each other. Richard’s presentation slides give a quick overview of the main concepts.
These two accounts got me thinking about the role that robots play on the Internet. Going even further than Richard, I’m now convinced not only that robots should be considered consumers, but that they are arguably the most important consumers visiting many websites.
This entry is adapted from a presentation I did at the University of Iceland today, hence all the decorations.
Using nature as a role model in design is one of my biggest interests. By this I mean studying nature and using its solutions, designs and methods when creating our own designs and technologies, a practice often referred to as biomimicry.
I had an exam in the Philosophy of Science this week, so I’m still in a somewhat philosophical mood. Science has of course interested me for a long time, but I had never really taken a good look at its foundations before. Doing so should be obligatory for anyone who wants to be a scientist: if you expect to look at the world with critical eyes, one of the most obvious things to be critical about is the methodology or framework you’re working within.
Anyway, that was not what I was going to write about. One of the main subjects of the exam was scientific knowledge: how it is accumulated and how it is linked, building up our interwoven web of knowledge. Some theories hold that all science consists of facts building on other facts, and so on, until we reach an axiom, something taken so much for granted that it needs no further explanation.
My question here is: if this is the case, shouldn’t we be able to computerize our scientific knowledge? And in any case, are we doing enough to make sure that the web of scientific knowledge is as tightly interwoven as it could and should be?
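A computerized web of knowledge of this kind could be modeled as a dependency graph: each fact lists the facts it rests on, and axioms rest on nothing. The sketch below is a toy illustration of that idea only; all the fact names are hypothetical:

```python
# Toy model of the web of knowledge: each fact maps to the facts it
# rests on; axioms rest on nothing. All entries are made-up examples.
knowledge = {
    "axiom:identity": [],
    "arithmetic": ["axiom:identity"],
    "calculus": ["arithmetic"],
    "mechanics": ["calculus", "arithmetic"],
}

def grounded(fact, web, seen=None):
    """True if every chain of support below `fact` ends at an axiom."""
    seen = set() if seen is None else seen
    if fact in seen:            # circular support does not ground a fact
        return False
    supports = web.get(fact)
    if supports is None:        # unknown fact: nothing supports it
        return False
    return all(grounded(s, web, seen | {fact}) for s in supports)

print(grounded("mechanics", knowledge))  # True: traces back to the axiom
```

In such a model, asking whether the web is “tightly interwoven” becomes a concrete question about the graph: whether every fact traces back to an axiom, and whether there are gaps or circular chains of support.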