Computational Cognitive Neuroscience Lab
Today: Model Learning

Computational Cognitive Neuroscience Lab
Today:
- Homework is due Friday, Feb 17
- Chapter 4 homework is shorter than the last one!
- Undergrads omit

Hebbian Learning
- "Neurons that fire together, wire together": correlation between sending and receiving activity strengthens the connection between them
- "Don't fire together, unwire": anti-correlation between sending and receiving activity weakens the connection

LTP/D via NMDA receptors
- NMDA receptors allow calcium to enter the (postsynaptic) cell
- NMDA receptors are blocked by Mg2+ ions, which are cast off when the membrane potential increases
- Glutamate (excitatory) binds to the unblocked NMDA receptor, causing a structural change that allows Ca2+ to pass through

Calcium and Synapses
- Calcium initiates multiple chemical pathways, depending on the level of calcium
- Low Ca2+ -> long-term depression (LTD)
- High Ca2+ -> long-term potentiation (LTP)
- LTP/D effects: new postsynaptic receptors, increased dendritic spine size, or increased presynaptic release processes (via a retrograde messenger)

Fixing Hebbian learning
- Hebbian learning results in infinite weights!
- Oja's normalization (savg_corr) keeps the weights bounded (a sketch comparing plain Hebb and Oja's rule follows the slides)
- When to learn? Conditional PCA: learn only when you see something interesting
- A single unit hogs everything? kWTA and contrast enhancement -> specialization

Principal Components Analysis (PCA)
- Principal, as in primary, not principle, as in some idea
- PCA seeks a linear combination of variables such that maximum variance is extracted from the variables. It then removes this variance and seeks a second linear combination that explains the maximum proportion of the remaining variance, and so on until you run out of variance. (A short illustration follows the slides.)

PCA continued
- This is like linear regression, except that the whole collection of variables (a vector) is correlated with itself to make a matrix; in effect, the collection of variables is regressed on itself.
- The line of best fit through this regression is the first principal component!

PCA cartoon

Conditional PCA
- "Perform PCA only when a particular input is received"
- Condition: the forces that determine when a receiving unit is active
- Competition means hidden units will specialize for particular inputs
- So hidden units only learn when their favorite input is available

Self-organizing learning
- kWTA determines which hidden units are active for a given input
- CPCA ensures those hidden units learn only about a single aspect of that input (a kWTA + CPCA sketch follows the slides)
- Contrast enhancement drives high weights higher and low weights lower (sketched after the slides)
- Contrast enhancement helps units specialize (and share)

Bias-variance dilemma
- High bias: actual experience does not change the model much, so the biases had better be good!
- Low bias: experience largely determines learning, but so does random error! The model could have come out differently, i.e., high model variance

Architecture as Bias
- Inhibition drives competition, competition determines which units are active, and unit activity determines learning
- Thus, deciding which units share inhibitory connections (i.e., are in the same layer) will affect the learning
- This architecture is the learning bias!

Fidelity and Simplicity of representations
- Information must be lost in the world-to-brain transformation (p. 118)
- There is a tradeoff between the amount of information lost and the complexity of the representation
- The fidelity/simplicity tradeoff is set by:
  - Conditional PCA (first principal component only)
  - Competition (k value)
  - Contrast enhancement (savg_corr, wt_gain)
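The "infinite weights" problem and Oja's fix can be seen in a small simulation. This is a generic sketch of plain Hebb versus Oja's rule (dw = eps * y * (x - y * w)), not the savg_corr implementation from the course software; the toy input distribution and learning rate are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy input distribution: zero-mean 2-D samples with correlated components.
cov = np.array([[3.0, 1.0],
                [1.0, 1.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=2000)

eps = 0.01                       # learning rate (arbitrary choice)
w_hebb = rng.normal(size=2) * 0.1
w_oja = w_hebb.copy()

for x in X:
    # Plain Hebb: dw = eps * y * x  -> weights grow without bound.
    y = w_hebb @ x
    w_hebb += eps * y * x

    # Oja's rule: dw = eps * y * (x - y * w)  -> ||w|| stays near 1 and w
    # converges (up to sign) on the first principal component of the inputs.
    y = w_oja @ x
    w_oja += eps * y * (x - y * w_oja)

print("plain Hebb ||w||:", np.linalg.norm(w_hebb))   # huge
print("Oja        ||w||:", np.linalg.norm(w_oja))    # ~1

evals, evecs = np.linalg.eigh(cov)
print("first PC       :", evecs[:, np.argmax(evals)])
print("Oja direction  :", w_oja / np.linalg.norm(w_oja))
```

The normalization term subtracts out the weight vector in proportion to how strongly the unit is already responding, which is what prevents the runaway growth of plain Hebbian learning.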
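The "extract variance, remove it, repeat" description of PCA can also be made concrete. The sketch below is a generic illustration (made-up data, eigenvector extraction plus deflation), not code from the course.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: 200 samples of 3 correlated variables, mean-centered.
X = rng.multivariate_normal([0.0, 0.0, 0.0],
                            [[4.0, 1.5, 0.5],
                             [1.5, 2.0, 0.3],
                             [0.5, 0.3, 1.0]], size=200)
X -= X.mean(axis=0)

def first_pc(data):
    """Unit vector pointing in the direction of maximum variance of `data`."""
    cov = np.cov(data, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    return evecs[:, np.argmax(evals)]

residual = X.copy()
for i in range(3):
    pc = first_pc(residual)
    var = np.var(residual @ pc)
    print(f"component {i + 1}: direction {np.round(pc, 2)}, variance {var:.2f}")
    # "Remove this variance": subtract each sample's projection onto the
    # component, then look for the next direction of maximum variance.
    residual -= np.outer(residual @ pc, pc)
```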
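A minimal sketch of self-organizing learning in the spirit of these slides: hard kWTA picks the active hidden units, and a CPCA-style rule dw = eps * y * (x - w) moves only the winners' weights toward the current input. The one-hot patterns, the hard (rather than average-based) kWTA, the initial weights near 0.5, and the parameter values are assumptions for illustration, not the simulator's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def kwta(net, k):
    """Hard k-winners-take-all: the k units with the largest net input
    become active (1); all others stay off (0)."""
    act = np.zeros_like(net)
    act[np.argsort(net)[-k:]] = 1.0
    return act

# Hypothetical environment: four one-hot input patterns.
patterns = np.eye(4)
n_in, n_hid, k, eps = 4, 4, 1, 0.1

# Weights start near 0.5 (an arbitrary choice for this sketch).
W = rng.uniform(0.4, 0.6, size=(n_hid, n_in))

for epoch in range(200):
    for x in rng.permutation(patterns):
        net = W @ x               # net input to each hidden unit
        y = kwta(net, k)          # competition decides who gets to learn
        # CPCA: only the active (winning) units learn, and their weights
        # move toward the current input: dw = eps * y * (x - w).
        W += eps * y[:, None] * (x[None, :] - W)

# Each winning unit's weight row ends up concentrated on the pattern(s) it
# wins; a unit that never wins keeps its initial weights (the "hogging"
# problem the slides mention is visible with unlucky initializations).
print(np.round(W, 2))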
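Contrast enhancement ("drive high weights higher, low weights lower") is commonly implemented as a sigmoidal function of the linear weight. The sketch below assumes one such parameterization with a gain (cf. wt_gain) and an offset; treat the exact formula as an assumption rather than the function used in the course software.

```python
import numpy as np

def contrast_enhance(w, gain=6.0, off=1.0):
    """Sigmoidal contrast enhancement of linear weights in (0, 1).
    Weights above the midpoint are pushed toward 1 and weights below it
    toward 0; gain controls how sharp the separation is.
    The parameterization here is an assumption, not taken from the slides."""
    w = np.clip(w, 1e-6, 1 - 1e-6)           # keep the ratio well defined
    return 1.0 / (1.0 + (off * (1.0 - w) / w) ** gain)

w = np.linspace(0.1, 0.9, 9)
print(np.round(w, 2))                        # linear weights
print(np.round(contrast_enhance(w), 2))      # high weights higher, low weights lower
```

Sharpening the weights this way helps each unit commit to the inputs it has started to represent, which is how contrast enhancement supports specialization.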