Comment on Tech Stuff: "Demosaicing: Normal, edge aware and edge weighted!" (2012-01-31)

For a long time now, robotics folks have harped on the idea of "grounded representations": the idea that your models are only meaningful when you can follow them down to observable properties and predictions in the real world. This generally applies to representations of objects and materials: a robot understands a cup better than an image retrieval algorithm does, because it understands the cup's affordances and dynamics.

So here's my thought: perhaps many concepts are only really learned and usable once they're grounded out in (perhaps many of) the abstractions they're built on. This is as opposed to a concept being an abstract computational object that's usable right away, as soon as its referents become available. For example, maybe I could pick up an algebraic topology book, read all about manifolds, and then hit the ground running in a computer graphics course. More likely, though, it'd be easier to go the other way around: knowing that a manifold might be used to describe the surface of a lake somehow HELPS me understand that in a manifold "every point has a neighborhood homeomorphic to an open Euclidean n-ball".

The thing that made me think of this was the POMDP material I was reading today. When I first took Mike's class in 2008, POMDP math was a mystery to me; I had no hope of understanding how it worked.
Interestingly, I had read explanations of Bayes' rule many times, but didn't really *get it*.

However, sometime between 2008 and now I learned a bit about MDPs. In particular, from the humanoids paper, quals, the PGSS project, the NAMO project, and that ISYE stats class, I developed a solid understanding of what it means to take an expectation over rewards or transitions in an MDP. I knew the words before, but now I've done it on paper and in code. And it turns out that having done those things made it easy to understand the belief updates for the POMDP, despite the fact that I'd read all about expectations and Bayes' rule before 2008. Somehow, without knowingly drawing an analogy to expected rewards, the belief-state updates made perfect sense.

So what gives? If the theory is that learning of abstract concepts is modular to those abstract structures, then those structures should have been happily in place for me to apply to POMDPs. The only catch should have been that I couldn't get any further than I did with MDPs.

There are all sorts of problems with these explanations and examples, but what I'm trying to say is that I'm suspicious of the idea of strong modularity of mind. As I think any expert in anything would attest, abstract knowledge is won through years of experience, in which presumably you're fleshing out the semantics of the abstract concepts from bottom to top.

Or as Frank puts it, "math isn't something you learn, it's something you get used to".

doctorscholls00 (https://www.blogger.com/profile/00764849527634001846)
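The belief update the comment is describing really is just the two operations it names: an expectation over the transition model (the same computation as an expected reward or next-state distribution in an MDP), followed by a Bayes-rule correction against the observation. A minimal sketch, assuming small tabular models; the variable names and matrix shapes here are illustrative, not from the post:

```python
# Minimal sketch of a discrete POMDP belief update (assumed tabular models):
#   T[a][s][s2] = P(s2 | s, a)   -- transition model
#   O[a][s2][o] = P(o | a, s2)   -- observation model
# b is the current belief, a list of probabilities over states.

def belief_update(b, a, o, T, O):
    n = len(b)
    # Prediction step: an expectation over transitions under the current belief,
    # structurally the same sum as an expected reward/next-state in an MDP.
    predicted = [sum(b[s] * T[a][s][s2] for s in range(n)) for s2 in range(n)]
    # Correction step: Bayes' rule, weighting each predicted state
    # by the likelihood of the received observation.
    unnorm = [O[a][s2][o] * predicted[s2] for s2 in range(n)]
    z = sum(unnorm)  # probability of the observation; normalizes the posterior
    return [u / z for u in unnorm]

# Toy two-state example with made-up numbers:
T = [[[0.9, 0.1], [0.2, 0.8]]]   # one action
O = [[[0.7, 0.3], [0.4, 0.6]]]   # two observations
b_next = belief_update([0.5, 0.5], 0, 0, T, O)
```

With these made-up matrices, observation 0 is more likely in state 0, so the updated belief shifts probability mass toward state 0.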