
Light bulb aha moment

Big ideas can't be planned like growing tomatoes in one's garden. We stumble upon ideas, and although we can sometimes recall how we got there, we could not have anticipated the discovery in advance.

That's why grant proposals never wrap up with, "And via following this four-part plan, I will have arrived at a ground-breaking new hypothesis by year three." Three impossible thoughts before breakfast we can manage, but one great idea before dinner we cannot. Unplanned ideas are best illustrated by "Eureka!", or "Aha!", moments, like Einstein's clock tower moment that sparked his special relativity, or Archimedes' bathtub water-displacement idea.

Perhaps ideas cannot be planned because of some peculiarity of our psychology. Had our brains evolved differently, perhaps we would never have Eureka moments. What if the computer brain, Hal, from 2001: A Space Odyssey were to say, "Something really cool just occurred to me, Dave!"? On the other hand, what if it is much deeper than that? What if the unplannability of ideas is due to the nature of ideas, and not to our brains at all?

In the late 1990s I began work on a new notion of computing which I called "self-monitoring" computation. Rather than having a machine simply follow an algorithm, I required that the machine also "monitor itself": at every stage it must report how close it is to finishing its work. And I demanded that the machine's report not be merely a probabilistic guess, but a number that gets lower on each computation step.
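To make that requirement concrete, here is a minimal sketch of a self-monitoring computation in the easiest possible case; the task and the report scheme are my illustration, not a formal construction:

```python
# A minimal sketch of a self-monitoring computation: alongside its
# ordinary work, the machine emits a "report" on every step, and each
# report must be strictly lower than the last. The example task and
# names are illustrative only.

def monitored_linear_search(items, target):
    """Search `items` for `target`, reporting progress at every step."""
    n = len(items)
    for i, item in enumerate(items):
        # The report is the count of unexamined items, which drops by
        # exactly one per step -- a legal self-monitoring report.
        print(f"report: {n - i} steps remain")
        if item == target:
            return i
    return -1

monitored_linear_search([4, 8, 15, 16, 23, 42], 16)
```

Monitoring is trivial here only because the total amount of work is fixed by the input's length before the first step; the interesting question is what the machine should report when it is not.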

What was the point of these machines? I was hoping to get a handle on the unanticipatability of ideas, and to understand the extent to which Eureka moments are in store for any sophisticated machine. If a problem could be solved via a self-monitoring machine, then that machine would come to a solution without a Eureka moment. But, I wondered, perhaps I would be able to prove that some problems are more difficult to monitor than others. And perhaps I would be able to show that some problems are not monitorable at all, and thus that their solutions necessitate Eureka moments.

On the basis of my description of self-monitoring machines above, one might suspect that I demanded that the machine's "self-monitoring report" be the number of steps left in the algorithm. But that would require machines to know exactly how many steps they need to finish an algorithm, and that wouldn't allow machines to compute much. Instead, the notion of "number" in the self-monitoring report is more subtle (concerning something called "transfinite ordinal numbers"), and can be best understood by your and my favorite thing…

Imagine you have been placed on a committee, and must meet weekly until some task is completed. If the task is easy, you may be able to announce at the first meeting that there will be exactly, say, 13 meetings. Usually, however, it will not be possible to know how many meetings will be needed. Instead, you might announce at the first meeting that there will be three initial meetings, and that at the third meeting the committee will decide how many more meetings will be needed. That one decision about how many more meetings to allow gives the committee greater computational power: now the committee is not stuck doing some fixed number of meetings, but can instead take three meetings to decide how many meetings it needs. This decision about how many more meetings to have is a "first-order decision." And committees can be much more powerful than that. Rather than deciding after three meetings how many more meetings there will be, you can announce that at the end of that decided-upon number of meetings, you will allow yourself one more first-order decision about how many meetings there will be.
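To connect the committee story to transfinite ordinals, here is a toy model; the encoding and names are my own sketch, not the formal machinery. The committee's remaining budget is a pair (k, n) standing for the ordinal ω·k + n: n meetings currently promised, plus k first-order decisions in reserve. Every step strictly lowers the budget in ordinal order, so the committee must eventually finish, even though no one can bound the total number of meetings in advance.

```python
# A toy model (my own illustration) of the committee's budget as an
# ordinal below omega^2, encoded as the pair (k, n) meaning
# omega*k + n and compared lexicographically.

import random

def run_committee(first_order_decisions, initial_meetings):
    k, n = first_order_decisions, initial_meetings
    meetings_held = 0
    while (k, n) != (0, 0):
        if n > 0:
            n -= 1                 # omega*k + n  ->  omega*k + (n - 1)
            meetings_held += 1
        else:
            # Spend a first-order decision: pick a fresh, arbitrarily
            # large finite number of further meetings. Any finite pick
            # still lowers the ordinal: omega*(k-1) + m < omega*k.
            m = random.randint(1, 10)
            print(f"decision: {m} more meetings")
            k, n = k - 1, m
    return meetings_held

# The committee from the story: three initial meetings, then one
# first-order decision about how many more (budget omega + 3).
total = run_committee(first_order_decisions=1, initial_meetings=3)
print(f"{total} meetings in total")
```

On this encoding, the "13 meetings" committee has budget 13, the committee with one first-order decision has budget ω + 3, and the committee that reserves a second first-order decision has budget ω·2 + 3; higher ordinals correspond to still more powerful committees.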


