RANLP 2015, Hissar, Bulgaria
Deep Learning in Industry Data Analytics
Junlan Feng, China Mobile Research

The starting point of AI: the Dartmouth Conference
(Founding participants included Nathaniel Rochester and the other pioneers of the field.)

Stages of AI: 1950s, 1980s, 2000s, Future
Themes raised at Dartmouth: automatic computers; how to program a computer to use language; neural networks; scale of computation; theory; self-improvement; abstraction; randomness and creativity. Later stages: rule-based expert systems; general intelligence.

Current AI technology: open problems
1. Reliance on large amounts of labeled data
2. "Narrow AI": systems trained to complete one specific task
3. Insufficient stability and safety
4. No explanation capability; models are not transparent

Current state of AI: applications
Why AI has become a hot topic: deep learning and reinforcement learning, plus large-scale, complex, streaming data.

Outline
1. An analysis of the White House AI R&D Strategic Plan
2. An analysis of the AI strategies of ten technology companies
3. Deep learning and recent progress
4. Reinforcement learning and recent progress
5. Deep learning applications in enterprise data analytics

The US National AI R&D Strategic Plan

Strategy I: Make long-term investments in AI research
Goals: secure US world leadership; prioritize investment in next-generation AI technology.
1. Advance data-focused methodologies for knowledge discovery
   - Efficient data-cleaning techniques that ensure the veracity and appropriateness of the data used to train systems
   - Jointly exploit data, metadata, and human feedback or knowledge
   - Analysis and mining of heterogeneous and multimodal data: discrete, continuous, temporal, spatial, spatio-temporal, and graph data
   - "Small data" mining, stressing the importance of rare events
   - Fusion of data with knowledge, especially domain knowledge bases
2. Enhance the perceptual capabilities of AI systems
   - Hardware and algorithms that improve the robustness and reliability of perception
   - Better detection, classification, discrimination, and recognition of objects in complex, dynamic environments
   - Better sensing of humans by sensors and algorithms, so that systems can cooperate with people
   - Computing and propagating the uncertainty of the perception system so that downstream decisions are better informed
3. Understand the theoretical capabilities and limits of AI
   - The theoretical upper bound of AI under current hardware and algorithmic frameworks, across learning, language, perception, reasoning, creativity, and planning
4. Pursue research on general-purpose AI
   - Today's systems are "Narrow AI", not "General AI"
   - General AI: flexible, multi-task, self-directed, with general competence across cognitive tasks (learning, language, perception, reasoning, creativity, planning)
   - Transfer learning
5. Develop scalable AI systems
   - Coordination of multiple AI systems; distributed planning and control
6. Foster research on human-like AI
   - Self-explanation capability
   - Current AI learns from big data as a black box; humans learn from small data, formal instruction, and hints of many kinds
   - Human-like AI systems could serve as intelligent assistants and intelligent tutors
7. Develop capable and reliable robots
   - Improve robot perception and enable smarter interaction with the complex physical world
8. Advance hardware for AI, and AI for hardware
   - GPUs: improved memory, I/O, clock speed, parallelism, and energy efficiency
   - "Neuromorphic" processors; processing of streaming, dynamic data
   - Using AI to improve hardware: high-performance computing, optimized energy consumption, improved computing performance, intelligent self-configuration, optimized data movement between multicore processors and memory

Strategy II: Develop effective methods for human-AI collaboration
Not replacing people but working with them, emphasizing the complementary roles of humans and AI systems.
1. AI that assists humans: many AI systems are designed for human use, replicating human computation, decision-making, and cognition
2. Techniques that augment humans: fixed devices, wearables, implants, and aids to data understanding
3. Visualization and human-friendly AI interfaces: present data and information in forms people can understand; improve the efficiency of human-system communication
4. More effective language processing systems: fluent speech recognition in quiet environments is solved; still open are recognition in noise, far-field speech, accents, children's speech, impaired speech, language understanding, and dialogue

Strategy III: Understand and address the ethical, legal, and societal implications of AI
1. Study the ethical, legal, and societal implications of AI, and expect systems to conform to human norms
   - AI systems should be designed to meet human ethical standards: fairness, justice, transparency, accountability
2. Build ethical AI
   - How to quantify ethics, turning something vague into precise system and algorithm design; ethics is usually fuzzy and varies across cultures, religions, and beliefs
3. Architectures for ethical AI
   - A two-layer architecture in which one layer is dedicated to ethical reasoning; ethical standards embedded in every step of AI engineering

Strategy IV: Ensure the safety and security of AI systems
1. Before AI systems are in widespread use, their safety must be assured
2. Study the challenges of creating AI systems that are stable, dependable, trustworthy, understandable, and controllable, and how to address them:
   1. Improve explainability and transparency
   2. Build trust
   3. Strengthen verification and validation
   4. Self-monitoring, self-diagnosis, self-correction
   5. Handling of unexpected situations and resistance to attacks

Strategy V: Develop shared public datasets and simulation environments for AI
1. An important public good, while fully respecting the rights and interests of companies and individuals in their data
2. Encourage open source

Strategy VI: Establish standards for measuring and evaluating AI technologies
1. Develop appropriate evaluation strategies and methods

Strategy VII: Better understand the national AI R&D workforce needs
1. Ensure a sufficient talent pipeline

Big data and AI
- Data is the source of AI
- Big-data technologies such as parallel and stream computing are what make AI practical
- AI is the principal method for analyzing big data, especially complex data
The AI strategies of the top 10 technology companies

Google: AI-First Strategy
1. Google spent USD 400 million to buy DeepMind, a UK AI start-up out of the University of London (founded in 2011)
2. AlphaGo
3. GNC
4. WaveNet
5. Q-Learning
Applications: speech recognition and synthesis; machine translation; self-driving cars; Google Glass; Google Now; the acquisition of Api.ai

Facebook
- Open-sourced its deep learning code: Torch
- Facebook M digital assistant
- Research and applications: FAIR & AML

Apple AI
- Apple Siri
- Apple bought Emotient and VocalIQ

Partnership on AI (29 September 2016)
It will "conduct research, recommend best practices, and publish research under an open license in areas such as ethics, fairness and inclusivity; transparency, privacy, and interoperability; collaboration between people and AI systems; and the trustworthiness, reliability and robustness of the technology".

Elon Musk: OpenAI
CEO of the four companies PayPal, Tesla, SpaceX, and SolarCity; invested one billion US dollars to found OpenAI.

Microsoft, IBM, Baidu
Domestic technology giants: Tencent, Alibaba, and iFlytek are investing heavily in AI.

5. Deep learning in enterprise data analytics: a case study
An example: AI in data analytics with deep learning — customer sentiment analysis
1. Introduction
2. Emotion Recognition in Text
3. Emotion Recognition in Speech
4. Emotion Recognition in Conversations
5. Industrial Application
(Datasets, features, methods)

Introduction: interchangeable terms
Opinion mining, sentiment analysis, emotion recognition, polarity detection, review mining.

Introduction: what are emotions?

Introduction: problem definition
We will focus only on document-level sentiment / opinion mining.

Introduction: text examples
- "a thriller without a lot of thrills"
- "An edgy thriller that delivers a surprising punch"
- "A flawed but engrossing thriller"
- "It's unlikely we'll see a better thriller this year"
- "An erotic thriller that's neither too erotic nor very thrilling either"
Emotions are expressed artistically with the help of negation, conjunction words, and sentiment words.

Introduction: text examples
- DSE: expressions that explicitly express an opinion holder's attitude
- ESE: expressions that indirectly express the attitude of the writer
Emotions are expressed both explicitly and indirectly.

Introduction: text examples
Emotions are expressed in language that is often obscured by sarcasm, ambiguity, and plays on words, all of which can be very misleading for both humans and computers:
- "A sharp tongue does not mean you have a keen mind"
- "I don't know what makes you so dumb but it really works"
- "Please, keep talking. So great. I always yawn when I am interested."

Introduction: speech and conversation examples

Typical approach: a classification task (text)
A document is mapped to features — n-grams (unigrams, bigrams), POS tags, term frequency, syntactic dependencies, negation tags — and classified as positive / neutral / negative.
- Supervised learning: SVM, MaxEnt, Naive Bayes, CRF, Random Forest
- Unsupervised learning: POS-tag patterns + dictionary + mutual-information rules
(A sketch of this classical pipeline is shown below.)
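To make the classical pipeline above concrete, here is a minimal Python sketch using scikit-learn: TF-IDF-weighted uni-/bi-gram features fed to a linear SVM, one of the classifiers listed above. The tiny inline corpus reuses the thriller snippets from the examples; the labels, hyper-parameters, and corpus are illustrative placeholders, not part of the tutorial.

```python
# Minimal bag-of-n-grams + linear SVM sentiment classifier (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

train_texts = [
    "an edgy thriller that delivers a surprising punch",
    "a flawed but engrossing thriller",
    "a thriller without a lot of thrills",
    "an erotic thriller that's neither too erotic nor very thrilling either",
]
train_labels = ["pos", "pos", "neg", "neg"]   # toy labels for the sketch

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),  # uni- and bi-gram features
    LinearSVC(C=1.0),                                      # linear SVM classifier
)
clf.fit(train_texts, train_labels)
print(clf.predict(["it is unlikely we will see a better thriller this year"]))
```

In practice the same pipeline is trained on one of the review corpora listed later in this deck, and negation tags or POS patterns are added as extra features.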
Typical approach: a classification task (speech)
Features:
- Prosodic features: pitch, energy, formants, etc.
- Voice quality features: harsh, tense, breathy, etc.
- Spectral features: LPC, MFCC, LPCC, etc.
- Teager Energy Operator (TEO)-based features: TEO-FM-var, TEO-Auto-Env, etc.
Supervised classifiers over positive / neutral / negative: SVM, GMM, HMM, DBN, KNN, LDA, CART.

Challenges remain
1. Text-based: capture compositional effects with higher accuracy — negating positive sentences, negating negative sentences, conjunctions
2. Speech-based: effective features are unknown; emotional speech segments tend to be transcribed with lower ASR accuracy

Overview
1. Introduction
2. Emotion Recognition in Text — word embedding for sentiment analysis; CNN for sentiment classification; RNN/LSTM for sentiment classification; prior knowledge + CNN/LSTM; parsing + RNN
3. Emotion Recognition in Speech
4. Emotion Recognition in Conversations
5. Industrial Application
How can deep learning change the game? Emotion classification with deep learning approaches.

1. Word embedding as features
The representation of text is very important for the performance of many real-world applications, including emotion recognition:
- Local representations: n-grams, bag-of-words, 1-of-N coding
- Continuous representations: Latent Semantic Analysis, Latent Dirichlet Allocation
- Distributed representations: word embedding
Tomas Mikolov, "Learning Representations of Text using Neural Networks", NIPS Deep Learning Workshop 2013 (Bengio et al., 2006; Collobert & Weston, 2008; Mnih & Hinton, 2008; Turian et al., 2010; Mikolov et al., 2013a, c)

Word embedding
Skip-gram and CBOW architectures; the hidden-layer vector is the word-embedding vector for w(t).

Word embedding for sentiment detection
- Widely accepted as standard features for NLP applications, including sentiment analysis, since 2013 (Mikolov 2013)
- The word vector space implicitly encodes many linguistic regularities among words, both semantic and syntactic
- Example: Google pre-trained word vectors, trained on roughly 100 billion words
- Does it encode polarity similarities? Top relevant words to "good":
  great 0.729151, bad 0.719005, terrific 0.688912, decent 0.683735, nice 0.683609, excellent 0.644293, fantastic 0.640778, better 0.612073, solid 0.580604, lousy 0.576420, wonderful 0.572612, terrible 0.560204, Good 0.558616
- Mostly yes, but it does not separate antonyms well

Learning sentiment-specific word embedding
Tang et al., "Learning Sentiment Specific Word Embedding for Twitter Sentiment Classification", ACL 2014
- In spirit this is similar to multi-task learning: it learns in the same way as regular word embedding, with a loss function that considers both the semantic context and the sentiment distance to Twitter emoticon symbols
- 10 million tweets selected by positive and negative emoticons are used as training data
- Evaluated on the Twitter sentiment classification track of SemEval 2013
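The polarity neighbourhood of "good" shown above, and the antonym problem that motivates sentiment-specific embeddings such as Tang et al.'s, can be reproduced with off-the-shelf vectors. Below is a minimal sketch using gensim's downloader and the public Google News word2vec model; the model name is an assumption about the environment, and the scores may differ slightly from the table above.

```python
# Query pre-trained word2vec vectors for the nearest neighbours of "good".
import gensim.downloader as api

wv = api.load("word2vec-google-news-300")       # large download on first use
for word, score in wv.most_similar("good", topn=12):
    print(f"{word:>12s}  {score:.3f}")
# Antonyms such as "bad" typically rank almost as high as synonyms,
# which is exactly the weakness for sentiment tasks noted above.
```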
Paragraph vectors
Le and Mikolov, "Distributed Representations of Sentences and Documents", ICML 2014
- Paragraph vectors are distributed vector representations for pieces of text, such as sentences or paragraphs
- The paragraph vectors are also asked to contribute to the task of predicting the next word given many contexts sampled from the paragraph
- Each paragraph corresponds to one column in the matrix D
- The paragraph vector acts as a memory that remembers what is missing from the current context, i.e. the topic of the paragraph

Paragraph vectors: best results on the MR dataset (Le and Mikolov, ICML 2014)

Overview — next: CNN for sentiment classification (then RNN/LSTM, prior knowledge + CNN/LSTM, and dataset collection)

CNN for sentiment classification
Ref: Yoon Kim, "Convolutional Neural Networks for Sentence Classification", EMNLP 2014
1. A simple CNN with one layer of convolution on top of word vectors, motivated by the success of CNNs on many other NLP tasks
2. Input layer: word vectors from the pre-trained Google News word2vec model
3. Convolutional layer: window sizes of 3, 4, and 5 words, each with 100 feature maps, giving 300 features in the penultimate layer
4. Pooling layer: max-over-time pooling over each feature map
5. Output layer: fully connected softmax layer producing a distribution over labels
6. Regularization: dropout on the penultimate layer with a constraint on the l2 norms of the weight vectors
7. The embedding vectors are fine-tuned during training

Common datasets

CNN for sentiment classification — results
- CNN-rand: randomly initialize all word embeddings
- CNN-static: word2vec embeddings, kept fixed
- CNN-non-static: fine-tune the embedding vectors

Why is it successful?
1. Multiple filters and multiple feature maps: emotions are expressed in segments rather than spanning the whole sentence
2. Pre-trained word2vec vectors are used as input features
3. The embedding vectors are further improved by non-static training; antonyms are further separated after training

Resources for this work
1. Source code: https://github.com/yoonkim/CNN_sentence
2. Implementation in TensorFlow: https://github.com/dennybritz/cnn-text-classification-tf
3. Extensive experiments: https://arxiv.org/pdf/1510.03820v4.pdf
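Below is a minimal PyTorch sketch of the single-convolution-layer architecture just described (window widths 3/4/5, 100 feature maps each, max-over-time pooling, dropout, softmax output). It is a re-implementation sketch, not the authors' released code; the vocabulary size, embedding dimension, and two-class output are placeholders, and in practice the embedding layer would be initialized from the pre-trained word2vec vectors.

```python
# Sketch of a Kim-style sentence CNN (illustrative sizes).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    def __init__(self, vocab_size=20000, emb_dim=300, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # load word2vec weights here in practice
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, 100, kernel_size=k) for k in (3, 4, 5)]
        )
        self.dropout = nn.Dropout(0.5)
        self.fc = nn.Linear(300, n_classes)              # 3 filter widths x 100 feature maps

    def forward(self, token_ids):                        # token_ids: (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)        # (batch, emb_dim, seq_len)
        feats = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]  # max over time
        h = self.dropout(torch.cat(feats, dim=1))        # (batch, 300)
        return self.fc(h)                                # class logits

model = TextCNN()
logits = model(torch.randint(0, 20000, (8, 40)))          # fake batch of 8 sentences
```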
Dynamic CNN for sentiment
Kalchbrenner et al., "A Convolutional Neural Network for Modelling Sentences", ACL 2014
Hyper-parameters in the experiments: k = 4; m = 5 with 14 feature maps; m = 7 with 6 feature maps; d = 48.

Dynamic CNN (Kalchbrenner et al., ACL 2014) vs. the simple CNN (Kim, EMNLP 2014):
  DCNN                                         | Simple CNN
  one-dimensional convolution                  | two-dimensional convolution
  48-d word vectors, randomly initialized      | 300-d vectors initialized with Google word2vec
  more complex architecture, dynamic pooling   | straightforward architecture
  6 and 4 feature maps                         | 100-128 feature maps

Why CNNs are effective
Johnson and Zhang, "Effective Use of Word Order for Text Categorization with Convolutional Neural Networks", ACL 2015
- It has been noted that the loss of word order caused by bag-of-word vectors is particularly problematic for sentiment classification
- A simple remedy is to use word bi-grams in addition to unigrams
- Comparison of an SVM with tri-gram features against a CNN with window filters of width 1, 2, and 3. Composition of the top 100 features:
    Feature    SVM   CNN
    Uni-grams   68     7
    Bi-grams    28    33
    Tri-grams    4    60
- SVMs cannot fully take advantage of high-order n-grams

Sentiment classification considering features beyond text with CNN models
Tang et al., "Learning Semantic Representations of Users and Products for Document Level Sentiment Classification", ACL 2015

Overview — next: RNN and LSTM for sentiment classification

Recursive Neural Tensor Network
Socher et al., "Recursive Deep Models for Semantic Compositionality over a Sentiment Treebank", EMNLP 2013. http://nlp.stanford.edu/sentiment/
1. The Stanford Sentiment Treebank is a corpus with fully labeled parse trees
2. It was created to facilitate analysis of the compositional effects of sentiment in language
3. 10,662 sentences from movie reviews, parsed by the Stanford parser; 215,154 phrases are labeled
4. A model called the Recursive Neural Tensor Network was proposed

RNTN — distribution of sentiment values for n-grams (Socher et al., EMNLP 2013)
Stronger sentiment often builds up in longer phrases, and the majority of the shorter phrases are neutral.

Recursive Neural Tensor Network (RNTN)
The composition of two child vectors a and b is
  p = tanh( [a; b]^T V^[1:d] [a; b] + W [a; b] )
where V is the tensor that directly relates the input vectors and W is the regular RNN weight matrix.

LSTM for sentiment analysis
Wang et al., "Predicting Polarities of Tweets by Composing Word Embedding with Long Short-Term Memory", ACL 2015
- LSTMs work tremendously well on a large number of problems
- Such architectures are more capable of learning complex compositions, such as negation of word vectors, than simple RNNs
- Input, stored information, and output are controlled by three gates

LSTM for sentiment analysis (Wang et al., ACL 2015)
- Dataset: the Stanford Twitter Sentiment corpus (STS)
- LSTM-TLT: word-embedding vectors as input; TLT = Trainable Look-up Table
- It is observed that negations are better captured

Gated Recurrent Unit
Tang et al., "Document Modeling with Gated Recurrent Neural Network for Sentiment Classification", EMNLP 2015

Gated Recurrent Neural Network (Tang et al., EMNLP 2015)
- Use a CNN/LSTM to generate sentence representations from word vectors
- A gated recurrent neural network (GRU) encodes sentence relations for sentiment classification
- A GRU can be viewed as a variant of the LSTM with the output gate always on
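For comparison with the CNN sketch earlier, here is a minimal PyTorch sketch of the word-embedding + LSTM polarity classifiers discussed above: an embedding look-up table, a single LSTM layer, and a linear softmax layer over the final hidden state. All sizes are illustrative placeholders; the cited systems differ in details such as the trainable look-up table, gating variants, and hierarchical document modelling with GRUs.

```python
# Sketch of an LSTM polarity classifier over word embeddings (illustrative sizes).
import torch
import torch.nn as nn

class LSTMSentiment(nn.Module):
    def __init__(self, vocab_size=20000, emb_dim=300, hidden=128, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)     # the trainable look-up table
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, token_ids):                          # token_ids: (batch, seq_len)
        x = self.embed(token_ids)
        _, (h_n, _) = self.lstm(x)                         # h_n: (1, batch, hidden)
        return self.fc(h_n[-1])                            # logits from the last hidden state

model = LSTMSentiment()
logits = model(torch.randint(0, 20000, (4, 25)))            # fake batch of 4 tweets
```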
CNN-LSTM
J. Wang et al., "Dimensional Sentiment Analysis Using a Regional CNN-LSTM Model", ACL 2016
The dimensional approach represents emotional states as continuous numerical values in multiple dimensions, such as the valence-arousal (VA) space (Russell, 1980). The valence dimension refers to the degree of positive or negative sentiment, whereas the arousal dimension refers to the degree of calm or excitement.

Tree-LSTM
K. S. Tai et al., "Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks", ACL 2015
- Tree-LSTM: a generalization of LSTMs to tree-structured network topologies
- Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank)

Tree-LSTM (Tai et al., ACL 2015)
- Dependency- and constituency-tree variants achieve comparable accuracy; the constituency-tree-based model performs better
- Word vectors are initialized with GloVe vectors (trained on 840 billion tokens of Common Crawl data, http://nlp.stanford.edu/projects/glove/)

Overview — next: prior knowledge + CNN/LSTM

Prior knowledge + deep neural networks
Hu et al., "Harnessing Deep Neural Networks with Logic Rules", ACL 2016
For each iteration:
1. The teacher network is obtained by projecting the student network onto a rule-regularized subspace
2. The student network is updated to balance between emulating the teacher's output and predicting the true labels
The process is agnostic to the student network and applicable to any architecture: RNN/DNN/CNN.

Prior knowledge + deep neural networks (Hu et al., ACL 2016)
The teacher network is created at each iteration based on two criteria: (1) it is close enough to the student network; (2) it reflects all of the rules.

Prior knowledge + deep neural networks: results
Accuracy on the SST2 dataset; "-Rule-q" denotes the teacher network.
One difficulty for a plain neural network is identifying contrastive sense in order to capture the dominant sentiment precisely.
Prior knowledge used in the experiment: for "A but B", the overall sentiment is consistent with the sentiment of B.
(A sketch of the rule-distillation objective follows.)
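Here is a hedged sketch of the rule-distillation objective described above: the teacher distribution is the student's own prediction re-weighted so that rule-violating labels are penalized, and the student is trained against both the gold label and the teacher. The helper is illustrative only; `rule_penalty` (how strongly each class violates a rule such as the "A but B" constraint for a given input) and the values of `pi` and `C` are assumptions, not the paper's exact formulation.

```python
# Sketch of a teacher/student loss with a soft logic-rule constraint (illustrative).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, gold, rule_penalty, pi=0.6, C=6.0):
    """student_logits: (batch, n_classes); gold: (batch,) long tensor of labels;
    rule_penalty: (batch, n_classes), 0 = rule satisfied, 1 = fully violated."""
    p = F.softmax(student_logits, dim=1)
    # Project the student onto the rule-regularized subspace to get the teacher q.
    q = p * torch.exp(-C * rule_penalty)
    q = q / q.sum(dim=1, keepdim=True)
    ce_gold = F.cross_entropy(student_logits, gold)                       # match true labels
    ce_teacher = -(q.detach() * F.log_softmax(student_logits, dim=1)).sum(dim=1).mean()
    return (1.0 - pi) * ce_gold + pi * ce_teacher                         # balanced objective
```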
Overview — next: dataset collection for text sentiment analysis

Text corpora for sentiment analysis
- MR: movie reviews with one sentence per review; the task is detecting positive/negative reviews. https://www.cs.cornell.edu/people/pabo/movie-review-data/
- SST: Stanford Sentiment Treebank, an extension of MR with train/dev/test splits and fine-grained labels (very positive, positive, neutral, negative, very negative), re-labeled by Socher et al. (2013). http://nlp.stanford.edu/sentiment/
- CR: customer reviews of various products (cameras, MP3 players, etc.); the task is to predict positive/negative reviews (Hu and Liu, 2004). http://www.cs.uic.edu/liub/FBS/sentiment-analysis.html
- MPQA: opinion polarity detection subtask of the MPQA dataset (Wiebe et al., 2005). http://www.cs.pitt.edu/mpqa/
- Yelp Dataset Challenge, 2013 and 2014. http://www.yelp.com/dataset_challenge
- IMDB: the rating scale of the IMDB dataset is 1-10. http://www.imdb.com/

Chinese text corpora for sentiment analysis
- News and blog posts annotated with Ekman emotions (Wang, 2014)
- Ren-CECps blog emotion corpus (Quan & Ren, 2009): sentences annotated with eight emotions: joy, expectation, love, surprise, anxiety, sorrow, anger, and hate
- 2013 Chinese Microblog Sentiment Analysis Evaluation (CMSAE): posts from Sina Weibo annotated with seven emotions: anger, disgust, fear, happiness, like, sadness, and surprise. Training set: 4,000 instances (13,252 sentences); test set: 10,000 instances (32,185 sentences). http://tcci.ccf.org.cn/conference/2013/pages/page04eva.html
- Chinese Valence-Arousal Texts (CVAT): Liang-Chih Yu, "Building Chinese Affective Resources in Valence-Arousal Dimensions", NAACL/HLT 2016

Manually created lexical resources (Saif M. Mohammad, "Computational Analysis of Affect and Emotion in Language", EMNLP 2015)
- Dictionary of Affect (Whissell): http://sail.usc.edu/dal_app.php
- Affective Norms for English Words (Texts) (Bradley & Lang): http://csea.phhp.ufl.edu/media.html
- Harvard General Inquirer categories (Stone et al.): http://www.wjh.harvard.edu/inquirer/
- NRC Emotion Lexicon (Mohammad & Turney): http://saifmohammad.com/WebPages/lexicons.html
- MaxDiff Sentiment Lexicon (Kiritchenko, Zhu, & Mohammad): http://saifmohammad.com/WebPages/lexicons.html

Shared tasks at the sentence level
- SemEval-2007: Affective Text. http://nlp.cs.swarthmore.edu/semeval/tasks/task14/summary.shtml
- SemEval-2013, 2014, 2015: Sentiment Analysis in Twitter. https://www.cs.york.ac.uk/semeval-2013/task2/ , http://alt.qcri.org/semeval2014/task9/ , http://alt.qcri.org/semeval2015/task10/
- SemEval-2015: Sentiment Analysis of Figurative Language in Twitter. http://alt.qcri.org/semeval2015/task11/
- Kaggle competition: Sentiment Analysis on Movie Reviews. https://www.kaggle.com/c/sentiment-analysis-on-movie-reviews

Other resources: affect corpora
- Affective Text dataset (Strapparava & Mihalcea): news headlines. http://web.eecs.umich.edu/mihalcea/downloads.html#affective
- Affect dataset (Alm): classic literary tales, sentence level. http://people.rc.rit.edu/coagla/
- 2012 US Presidential Elections tweets (Mohammad et al.): http://saifmohammad.com/WebDocs/ElectoralTweetsData.zip
- Emotional Prosody Speech and Transcripts (actors/numbers) (Liberman et al.): https://catalog.ldc.upenn.edu/LDC2002S28
- HUMAINE multimodal database (Douglas-Cowie et al.): http://emotion-research.net/download/pilot-db/
- Other: EmotionML (Schröder et al.): http://www.w3.org/TR/emotionml/ ; ACII (multiple data formats); Interspeech (spoken language); IEEE Transactions on Affective Computing: http://www.computer.org/web/tac
Overview — next: 3. Emotion Recognition in Speech (the common framework; DNN, RNN, and CNN approaches; data collection)

The common framework
Step 1: segment-level modeling; Step 2: utterance-level classification. Typical models: CNN, DNN, LSTM-RNN, ELM.

The common features
1. Frame feature set
   - Frame length: 25 ms, with a 10 ms shift; segment length: 265 ms, enough to express emotion
   - INTERSPEECH 2009 Emotion Challenge feature set: 12 MFCCs; F0; root-mean-square signal frame energy; zero-crossing rate of the time signal; the voicing probability computed from the ACF; plus first-order derivatives
2. Acoustic (segment) features
   - Segment length: 250 ms, formed by stacking the frame features
   - The classifier outputs a distribution over emotion states

Overview — next: DNN for speech emotion recognition

DBN + i-vector
Rui Xia and Yang Liu, "A DBN-ivector Framework for Acoustic Emotion Recognition", Interspeech 2016

DNN + ELM
K. Han, D. Yu and I. Tashev, "Speech Emotion Recognition Using Deep Neural Network and Extreme Learning Machine", Interspeech 2014
1. Frame-level features: 30 common acoustic features
2. Segment-level features: stacks of low-level frame-based features; a DNN acts as a classifier to separate positive and negative
3. Utterance-level features: statistics of the segment-level probabilities — the maximum, minimum, and mean of the segment-level probability of the k-th emotion over the utterance, and the percentage of segments with a high probability of emotion k

DNN + ELM: frame/segment-level DNN (Han et al., Interspeech 2014)
1. Input layer: 750 units (25 frames x 30 LLD features per frame)
2. Hidden layers: 3 layers, 256 ReLU neurons per layer
3. Output layer: 5 emotions (excitement, frustration, happiness, neutral, and surprise)
4. Training: mini-batch gradient descent with cross-entropy as the objective function

DNN + ELM: utterance-level extreme learning machine (Han et al., Interspeech 2014)
1. Input layer: 4 statistics x 5 emotions
2. Hidden layer: 1 layer, 120 units
3. Output layer: 5 emotions (excitement, frustration, happiness, neutral, and surprise)
4. Training: very fast

DNN + ELM: evaluation (Han et al., Interspeech 2014)
The Interactive Emotional Dyadic Motion Capture (IEMOCAP) database is used to evaluate the approach. The database contains audiovisual data from 10 actors; only the audio track is used in this evaluation.

RNN-LSTM
Jinkyu Lee and Ivan Tashev, "High-level Feature Representation using Recurrent Neural Network for Speech Emotion Recognition", Interspeech 2015
1. Apply an LSTM to replace the DNN in the previous work
2. Change the segment-selection strategy: randomly assign a "NULL" emotion to non-silent segments, instead of using the segments with the highest energy
3. Motivation: the DNN assumes the contextual effect is covered by a long segment (150-250 ms); an RNN-LSTM is capable of modeling long and variable context effects
(See the feature-extraction sketch below.)
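Before moving on to the CNN-based approaches, here is a minimal sketch of the frame/segment feature pipeline shared by the DNN and LSTM systems above: 25 ms frames with a 10 ms shift, a few low-level descriptors per frame, and stacks of consecutive frames as segment-level inputs. The file name, 16 kHz sampling rate, and the reduced feature set (12 MFCCs, frame energy, zero-crossing rate) are illustrative assumptions; the cited systems use richer feature sets such as the INTERSPEECH 2009 set.

```python
# Sketch of frame-level feature extraction and ~250 ms segment stacking.
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=16000)     # placeholder file name
frame, hop = 400, 160                                # 25 ms window, 10 ms shift at 16 kHz

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=12, n_fft=frame, hop_length=hop)
energy = librosa.feature.rms(y=y, frame_length=frame, hop_length=hop)
zcr = librosa.feature.zero_crossing_rate(y, frame_length=frame, hop_length=hop)

frames = np.vstack([mfcc, energy, zcr]).T            # (n_frames, 14) low-level descriptors

# Stack 25 consecutive frames (~250 ms) into one segment-level feature vector,
# the input unit used by the segment-level emotion classifiers above.
seg_len = 25
segments = np.stack([frames[i:i + seg_len].ravel()
                     for i in range(0, len(frames) - seg_len, seg_len)])
print(segments.shape)                                # (n_segments, 25 * 14)
```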
Overview — next: CNN for speech emotion recognition

CNN to extract affect-salient features for SER
Z. Huang et al., "Speech Emotion Recognition using CNN", ACM Multimedia 2014
Instead of hand-tuned features, a CNN is applied to automatically select affect-salient features, disentangling emotion from other factors such as speaker identity and noise.

CNN to extract affect-salient features for SER (Huang et al., MM 2014)
The input is a spectrogram at two different resolutions. Through unsupervised feature learning the system obtains one long feature vector y = F(x); based on it, semi-supervised feature learning produces the affect-salient features (e) and the nuisance features (o). Finally, the affect-salient features are fed to a linear SVM for SER.

Semi-CNN (Huang et al., MM 2014)
1. Unsupervised learning
2. Semi-supervised learning

Semi-CNN: data and features (Huang et al., MM 2014)
1. Data in different languages: Surrey Audio-Visual Expressed Emotion (SAVEE) database, Berlin Emotional Database (Emo-DB), Danish Emotional Speech database (DES), Mandarin Emotional Speech database (MES)
2. Results with different features:
   - spectrogram representation ("RAW" features)
   - acoustic features
   - Teager Energy Operator (TEO)
   - Local Invariant Features (LIF)
   - with and without the affect-salience penalty
   - with and without the orthogonality penalty

CNN results (Huang et al., MM 2014)

End-to-end SER with CNN + LSTM
G. Trigeorgis et al., "Adieu Features? End-to-End Speech Emotion Recognition Using a Deep Convolutional Recurrent Network", ICASSP 2016
(A spectrogram-CNN sketch follows.)
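The following is a minimal PyTorch sketch of a spectrogram-based CNN classifier in the spirit of the models above: a log-mel spectrogram treated as a one-channel image, two convolution/pooling blocks, and a linear layer over emotion classes. It is not the architecture of any cited paper; the number of mel bands, frames, and emotion classes are placeholders.

```python
# Sketch of a CNN over log-mel spectrograms for speech emotion recognition.
import torch
import torch.nn as nn

class SpectrogramCNN(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),                # fixed-size map regardless of length
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, spec):                             # spec: (batch, 1, n_mels, n_frames)
        h = self.features(spec).flatten(1)
        return self.classifier(h)                        # emotion logits

model = SpectrogramCNN()
logits = model(torch.randn(2, 1, 64, 100))               # fake batch of 2 log-mel spectrograms
```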
Overview — next: data collection for speech emotion recognition

Datasets for speech emotion recognition (parts 1-3)
Chung-Hsien Wu, "Emotion and mental state recognition: features, models, systems, applications and beyond", ISCSLP 2014

Overview — next: 4. Emotion Recognition in Real-Life Conversations (customer-care dialogue)

Real-life conversations
C. Vaudable and L. Devillers, "Negative Emotions Detection as an Indicator of Dialogs Quality in Call Centers", ICASSP 2012
- Not widely researched
- Emotions are much more shaded in the Voxfactory corpus (call-center recordings of a power supply company in France) than in the prototypical corpus JEMO (a portrayed-emotion corpus)

Unsatisfied customer detection with deep learning
P. Cong et al., "Unsatisfied Customer Call Detection with Deep Learning", ISCSLP 2016
Data are sampled from a call center; labels are provided by the users.
Challenges:
1. Low sampling rate (6 kHz)
2. Users call under unpredictable background noise and with different levels of accent
3. Frequent overtalk
4. Negative segments are rare

Overview — next: 5. Industrial Applications

Industrial applications
- Chat robots: detect negative/positive emotion and give a proper response
- Public opinion polling: surveys of public opinion are widely used in industry, government, and research (Owen Rambow, "Sentiment and Belief: How to Think about, Represent, and Annotate Private States", ACL 2015)
- Stock market prediction: daily live stock market prediction and tracking using Twitter sentiment (Furu Wei, "Sentiment Analysis and Opinion Mining", technical report)
- Applications viable in the near future, related to audio-visual emotion recognition: affective robots, affective games, intelligent classrooms, intelligent homes, and more
- Product review mining: which features of a product do customers like, and which do they dislike?
- Review classification: is a review positive or negative toward the movie?
- Tracking sentiment toward topics over time: is anger ratcheting up or cooling down?
- Prediction (election outcomes, market trends): will Clinton or Cruz win? (Owen Rambow, ACL 2015)

Trend in paper numbers
Counts of papers from Interspeech and ICASSP, currently for 2015 and 2016; if this statistic proves useful, the years 2011-2014 will be added as well.
1. Looking at speech emotion alone, the number of papers is declining.
2. To reflect the rising interest, papers on multimodal interaction should be counted as well.

Sentiment analysis paper counts
Sentiment analysis paper count in ACL from 2011 to 2016.

References
- Tomas Mikolov, "Learning Representations of Text using Neural Networks", NIPS Deep Learning Workshop 2013
- Tang et al., "Learning Sentiment Specific Word Embedding for Twitter Sentiment Classification", ACL 2014
- Le and Mikolov, "Distributed Representations of Sentences and Documents", ICML 2014
- Yoon Kim, "Convolutional Neural Networks for Sentence Classification", EMNLP 2014
- Kalchbrenner et al., "A Convolutional Neural Network for Modelling Sentences", ACL 2014
- Johnson and Zhang, "Effective Use of Word Order for Text Categorization with Convolutional Neural Networks", ACL 2015
- Tang et al., "Learning Semantic Representations of Users and Products for Document Level Sentiment Classification", ACL 2015
- Socher et al., "Recursive Deep Models for Semantic Compositionality over a Sentiment Treebank", EMNLP 2013
- Wang et al., "Predicting Polarities of Tweets by Composing Word Embedding with Long Short-Term Memory", ACL 2015
- Tang et al., "Document Modeling with Gated Recurrent Neural Network for Sentiment Classification", EMNLP 2015
- J. Wang et al., "Dimensional Sentiment Analysis Using a Regional CNN-LSTM Model", ACL 2016
- K. S. Tai et al., "Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks", ACL 2015
- Saif M. Mohammad, "Computational Analysis of Affect and Emotion in Language", EMNLP 2015

End of presentation. Thank you!