Pattern Classification = 模式分类 / 2nd ed.
Subtitle: none
Authors: Richard O. Duda, Peter E. Hart, David G. Stork
Classification number: O235
ISBN: 9787111136873
Description
"The first edition of this book was a foundational work in the field of pattern recognition. Now Dr. Stork has selected the most important recent results in the field, produced a fresh summary of how pattern recognition has developed, and identified the problems that will matter most over the next 30 years. The book is clear and readable; its new figures bring many statistical and mathematical topics to life, and the whole leads the reader smoothly into a range of new subjects."
-- Sargur N. Srihari, Ph.D., Professor of Computer Science and Engineering, State University of New York at Buffalo

Practitioners who build and study pattern recognition systems, whether for speech recognition, character recognition, image processing, or signal analysis, routinely face the problem of choosing among a bewildering array of techniques. This unique textbook and professional reference supplies the material you need to choose the technique best suited to your task. A new edition of a book that has been the field's classic reference for decades, it updates and expands the original, focusing on pattern classification and the major advances the field has seen in recent years. It has been adopted as a textbook by more than 120 universities, including Carnegie Mellon, Harvard, Stanford, and Cambridge.
Clearly explains both classical and newer approaches to pattern recognition, including neural networks, stochastic methods, genetic algorithms, and machine learning theory.
Provides many two-color figures that highlight key concepts.
Includes a large number of practical worked examples.
Presents pattern recognition algorithms in pseudocode.
Expands the problems and computer exercises that are central to the text.
Explains specific pattern recognition and machine learning techniques in algorithmic form.
Closes each chapter with bibliographical and historical remarks and key references.
Supplies the necessary mathematical background in an appendix.
模式分类(原书第2版) (Pattern Classification, 2nd ed.): http://www.china-pub.com/computers/common/info.asp?id=14573
Contents
PREFACE

1 INTRODUCTION
1.1 Machine Perception, 1
1.2 An Example, 1
1.2.1 Related Fields, 8
1.3 Pattern Recognition Systems, 9
1.3.1 Sensing, 9
1.3.2 Segmentation and Grouping, 9
1.3.3 Feature Extraction, 11
1.3.4 Classification, 12
1.3.5 Post Processing, 13
1.4 The Design Cycle, 14
1.4.1 Data Collection, 14
1.4.2 Feature Choice, 14
1.4.3 Model Choice, 15
1.4.4 Training, 15
1.4.5 Evaluation, 15
1.4.6 Computational Complexity, 16
1.5 Learning and Adaptation, 16
1.5.1 Supervised Learning, 16
1.5.2 Unsupervised Learning, 17
1.5.3 Reinforcement Learning, 17
1.6 Conclusion, 17
Summary by Chapters, 17
Bibliographical and Historical Remarks, 18
Bibliography, 19

2 BAYESIAN DECISION THEORY
2.1 Introduction, 20
2.2 Bayesian Decision Theory -- Continuous Features, 24
2.2.1 Two-Category Classification, 25
2.3 Minimum-Error-Rate Classification, 26
2.3.1 Minimax Criterion, 27
*2.3.2 Neyman-Pearson Criterion, 28
2.4 Classifiers, Discriminant Functions, and Decision Surfaces, 29
2.4.1 The Multicategory Case, 29
2.4.2 The Two-Category Case, 30
2.5 The Normal Density, 31
2.5.1 Univariate Density, 32
2.5.2 Multivariate Density, 33
2.6 Discriminant Functions for the Normal Density, 36
*2.7 Error Probabilities and Integrals, 45
*2.8 Error Bounds for Normal Densities, 46
2.8.1 Chernoff Bound, 46
2.8.2 Bhattacharyya Bound, 47
Example 2 Error Bounds for Gaussian Distributions, 48
2.8.3 Signal Detection Theory and Operating Characteristics, 48
2.9 Bayes Decision Theory -- Discrete Features, 51
2.9.1 Independent Binary Features, 52
Example 3 Bayesian Decisions for Three-Dimensional Binary Data, 53
*2.10 Missing and Noisy Features, 54
2.10.1 Missing Features, 54
2.10.2 Noisy Features, 55
*2.11 Bayesian Belief Networks, 56
Example 4 Belief Network for Fish, 59
2.12 Compound Bayesian Decision Theory and Context, 62
Summary, 63
Bibliographical and Historical Remarks, 64
Problems, 65
Computer exercises, 80
Bibliography, 82

3 MAXIMUM-LIKELIHOOD AND BAYESIAN PARAMETER ESTIMATION
3.1 Introduction, 84
3.2 Maximum-Likelihood Estimation, 85
3.2.1 The General Principle, 85
3.2.2 The Gaussian Case: Unknown μ, 88
3.2.3 The Gaussian Case: Unknown μ and Σ, 88
3.2.4 Bias, 89
3.3 Bayesian Estimation, 90
3.3.1 The Class-Conditional Densities, 91
3.3.2 The Parameter Distribution, 91
3.4 Bayesian Parameter Estimation: Gaussian Case, 92
3.4.1 The Univariate Case: p(μ|D), 92
3.4.2 The Univariate Case: p(x|D), 95
3.4.3 The Multivariate Case, 95
3.5 Bayesian Parameter Estimation: General Theory, 97
Example 1 Recursive Bayes Learning, 98
3.5.1 When Do Maximum-Likelihood and Bayes Methods Differ?, 100
3.5.2 Noninformative Priors and Invariance, 101
3.5.3 Gibbs Algorithm, 102
*3.6 Sufficient Statistics, 102
3.6.1 Sufficient Statistics and the Exponential Family, 106
3.7 Problems of Dimensionality, 107
3.7.1 Accuracy, Dimension, and Training Sample Size, 107
3.7.2 Computational Complexity, 111
3.7.3 Overfitting, 113
*3.8 Component Analysis and Discriminants, 114
3.8.1 Principal Component Analysis (PCA), 115
3.8.2 Fisher Linear Discriminant, 117
3.8.3 Multiple Discriminant Analysis, 121
*3.9 Expectation-Maximization (EM), 124
Example 2 Expectation-Maximization for a 2D Normal Model, 126
3.10 Hidden Markov Models, 128
3.10.1 First-Order Markov Models, 128
3.10.2 First-Order Hidden Markov Models, 129
3.10.3 Hidden Markov Model Computation, 129
3.10.4 Evaluation, 131
Example 3 Hidden Markov Model, 133
3.10.5 Decoding, 135
Example 4 HMM Decoding, 136
3.10.6 Learning, 137
Summary, 139
Bibliographical and Historical Remarks, 139
Problems, 140
Computer exercises, 155
Bibliography, 159

4 NONPARAMETRIC TECHNIQUES, 161
4.1 Introduction, 161
4.2 Density Estimation, 161
4.3 Parzen Windows, 164
4.3.1 Convergence of the Mean, 167
4.3.2 Convergence of the Variance, 167
4.3.3 Illustrations, 168
4.3.4 Classification Example, 168
4.3.5 Probabilistic Neural Networks (PNNs), 172
4.3.6 Choosing the Window Function, 174
4.4 kn-Nearest-Neighbor Estimation, 174
4.4.1 kn-Nearest-Neighbor and Parzen-Window Estimation, 176
4.4.2 Estimation of A Posteriori Probabilities, 177
4.5 The Nearest-Neighbor Rule, 177
4.5.1 Convergence of the Nearest Neighbor, 179
4.5.2 Error Rate for the Nearest-Neighbor Rule, 180
4.5.3 Error Bounds, 180
4.5.4 The k-Nearest-Neighbor Rule, 182
4.5.5 Computational Complexity of the k-Nearest-Neighbor Rule, 184
4.6 Metrics and Nearest-Neighbor Classification, 187
4.6.1 Properties of Metrics, 187
4.6.2 Tangent Distance, 188
*4.7 Fuzzy Classification, 192
*4.8 Reduced Coulomb Energy Networks, 195
4.9 Approximations by Series Expansions, 197
Summary, 199
Bibliographical and Historical Remarks, 200
Problems, 201
Computer exercises, 209
Bibliography, 213

5 LINEAR DISCRIMINANT FUNCTIONS, 215
5.1 Introduction, 215
5.2 Linear Discriminant Functions and Decision Surfaces, 216
5.2.1 The Two-Category Case, 216
5.2.2 The Multicategory Case, 218
5.3 Generalized Linear Discriminant Functions, 219
5.4 The Two-Category Linearly Separable Case, 223
5.4.1 Geometry and Terminology, 224
5.4.2 Gradient Descent Procedures, 224
5.5 Minimizing the Perceptron Criterion Function, 227
5.5.1 The Perceptron Criterion Function, 227
5.5.2 Convergence Proof for Single-Sample Correction, 229
5.5.3 Some Direct Generalizations, 232
5.6 Relaxation Procedures, 235
5.6.1 The Descent Algorithm, 235
5.6.2 Convergence Proof, 237
5.7 Nonseparable Behavior, 238
5.8 Minimum Squared-Error Procedures, 239
5.8.1 Minimum Squared-Error and the Pseudoinverse, 240
Example 1 Constructing a Linear Classifier by Matrix Pseudoinverse, 241
5.8.2 Relation to Fisher's Linear Discriminant, 242
5.8.3 Asymptotic Approximation to an Optimal Discriminant, 243
5.8.4 The Widrow-Hoff or LMS Procedure, 245
5.8.5 Stochastic Approximation Methods, 246
5.9 The Ho-Kashyap Procedures, 249
5.9.1 The Descent Procedure, 250
5.9.2 Convergence Proof, 251
5.9.3 Nonseparable Behavior, 253
5.9.4 Some Related Procedures, 253
*5.10 Linear Programming Algorithms, 256
5.10.1 Linear Programming, 256
5.10.2 The Linearly Separable Case, 257
5.10.3 Minimizing the Perceptron Criterion Function, 258
*5.11 Support Vector Machines, 259
5.11.1 SVM Training, 263
Example 2 SVM for the XOR Problem, 264
5.12 Multicategory Generalizations, 265
5.12.1 Kesler's Construction, 266
5.12.2 Convergence of the Fixed-Increment Rule, 266
5.12.3 Generalizations for MSE Procedures, 268
Summary, 269
Bibliographical and Historical Remarks, 270
Problems, 271
Computer exercises, 278
Bibliography, 281

6 MULTILAYER NEURAL NETWORKS, 282
6.1 Introduction, 282
6.2 Feedforward Operation and Classification, 284
6.2.1 General Feedforward Operation, 286
6.2.2 Expressive Power of Multilayer Networks, 287
6.3 Backpropagation Algorithm, 288
6.3.1 Network Learning, 289
6.3.2 Training Protocols, 293
6.3.3 Learning Curves, 295
6.4 Error Surfaces, 296
6.4.1 Some Small Networks, 296
6.4.2 The Exclusive-OR (XOR), 298
6.4.3 Larger Networks, 298
6.4.4 How Important Are Multiple Minima?, 299
6.5 Backpropagation as Feature Mapping, 299
6.5.1 Representations at the Hidden Layer -- Weights, 302
6.6 Backpropagation, Bayes Theory and Probability, 303
6.6.1 Bayes Discriminants and Neural Networks, 303
6.6.2 Outputs as Probabilities, 304
*6.7 Related Statistical Techniques, 305
6.8 Practical Techniques for Improving Backpropagation, 306
6.8.1 Activation Function, 307
6.8.2 Parameters for the Sigmoid, 308
6.8.3 Scaling Input, 308
6.8.4 Target Values, 309
6.8.5 Training with Noise, 310
6.8.6 Manufacturing Data, 310
6.8.7 Number of Hidden Units, 310
6.8.8 Initializing Weights, 311
6.8.9 Learning Rates, 312
6.8.10 Momentum, 313
6.8.11 Weight Decay, 314
6.8.12 Hints, 315
6.8.13 On-Line, Stochastic or Batch Training?, 316
6.8.14 Stopped Training, 316
6.8.15 Number of Hidden Layers, 317
6.8.16 Criterion Function, 318
*6.9 Second-Order Methods, 318
6.9.1 Hessian Matrix, 318
6.9.2 Newton's Method, 319
6.9.3 Quickprop, 320
6.9.4 Conjugate Gradient Descent, 321
Example 1 Conjugate Gradient Descent, 322
*6.10 Additional Networks and Training Methods, 324
6.10.1 Radial Basis Function Networks (RBFs), 324
6.10.2 Special Bases, 325
6.10.3 Matched Filters, 325
6.10.4 Convolutional Networks, 326
6.10.5 Recurrent Networks, 328
6.10.6 Cascade-Correlation, 329
6.11 Regularization, Complexity Adjustment and Pruning, 330
Summary, 333
Bibliographical and Historical Remarks, 333
Problems, 335
Computer exercises, 343
Bibliography, 347

7 STOCHASTIC METHODS, 350
7.1 Introduction, 350
7.2 Stochastic Search, 351
7.2.1 Simulated Annealing, 351
7.2.2 The Boltzmann Factor, 352
7.2.3 Deterministic Simulated Annealing, 357
7.3 Boltzmann Learning, 360
7.3.1 Stochastic Boltzmann Learning of Visible States, 360
7.3.2 Missing Features and Category Constraints, 365
7.3.3 Deterministic Boltzmann Learning, 366
7.3.4 Initialization and Setting Parameters, 367
*7.4 Boltzmann Networks and Graphical Models, 370
7.4.1 Other Graphical Models, 372
*7.5 Evolutionary Methods, 373
7.5.1 Genetic Algorithms, 373
7.5.2 Further Heuristics, 377
7.5.3 Why Do They Work?, 378
*7.6 Genetic Programming, 378
Summary, 381
Bibliographical and Historical Remarks, 381
Problems, 383
Computer exercises, 388
Bibliography, 391

8 NONMETRIC METHODS, 394
8.1 Introduction, 394
8.2 Decision Trees, 395
8.3 CART, 396
8.3.1 Number of Splits, 397
8.3.2 Query Selection and Node Impurity, 398
8.3.3 When to Stop Splitting, 402
8.3.4 Pruning, 403
8.3.5 Assignment of Leaf Node Labels, 404
Example 1 A Simple Tree, 404
8.3.6 Computational Complexity, 406
8.3.7 Feature Choice, 407
8.3.8 Multivariate Decision Trees, 408
8.3.9 Priors and Costs, 409
8.3.10 Missing Attributes, 409
Example 2 Surrogate Splits and Missing Attributes, 410
8.4 Other Tree Methods, 411
8.4.1 ID3, 411
8.4.2 C4.5, 411
8.4.3 Which Tree Classifier Is Best?, 412
*8.5 Recognition with Strings, 413
8.5.1 String Matching, 415
8.5.2 Edit Distance, 418
8.5.3 Computational Complexity, 420
8.5.4 String Matching with Errors, 420
8.5.5 String Matching with the "Don't-Care" Symbol, 421
8.6 Grammatical Methods, 421
8.6.1 Grammars, 422
8.6.2 Types of String Grammars, 424
Example 3 A Grammar for Pronouncing Numbers, 425
8.6.3 Recognition Using Grammars, 426
8.7 Grammatical Inference, 429
Example 4 Grammatical Inference, 431
*8.8 Rule-Based Methods, 431
8.8.1 Learning Rules, 433
Summary, 434
Bibliographical and Historical Remarks, 435
Problems, 437
Computer exercises, 446
Bibliography, 450

9 ALGORITHM-INDEPENDENT MACHINE LEARNING, 453
9.1 Introduction, 453
9.2 Lack of Inherent Superiority of Any Classifier, 454
9.2.1 No Free Lunch Theorem, 454
Example 1 No Free Lunch for Binary Data, 457
*9.2.2 Ugly Duckling Theorem, 458
9.2.3 Minimum Description Length (MDL), 461
9.2.4 Minimum Description Length Principle, 463
9.2.5 Overfitting Avoidance and Occam's Razor, 464
9.3 Bias and Variance, 465
9.3.1 Bias and Variance for Regression, 466
9.3.2 Bias and Variance for Classification, 468
9.4 Resampling for Estimating Statistics, 471
9.4.1 Jackknife, 472
Example 2 Jackknife Estimate of Bias and Variance of the Mode, 473
9.4.2 Bootstrap, 474
9.5 Resampling for Classifier Design, 475
9.5.1 Bagging, 475
9.5.2 Boosting, 476
9.5.3 Learning with Queries, 480
9.5.4 Arcing, Learning with Queries, Bias and Variance, 482
9.6 Estimating and Comparing Classifiers, 482
9.6.1 Parametric Models, 483
9.6.2 Cross-Validation, 483
9.6.3 Jackknife and Bootstrap Estimation of Classification Accuracy, 485
9.6.4 Maximum-Likelihood Model Comparison, 486
9.6.5 Bayesian Model Comparison, 487
9.6.6 The Problem-Average Error Rate, 489
9.6.7 Predicting Final Performance from Learning Curves, 492
9.6.8 The Capacity of a Separating Plane, 494
9.7 Combining Classifiers, 495
9.7.1 Component Classifiers with Discriminant Functions, 496
9.7.2 Component Classifiers without Discriminant Functions, 498
Summary, 499
Bibliographical and Historical Remarks, 500
Problems, 502
Computer exercises, 508
Bibliography, 513

10 UNSUPERVISED LEARNING AND CLUSTERING, 517
10.1 Introduction, 517
10.2 Mixture Densities and Identifiability, 518
10.3 Maximum-Likelihood Estimates, 519
10.4 Application to Normal Mixtures, 521
10.4.1 Case 1: Unknown Mean Vectors, 522
10.4.2 Case 2: All Parameters Unknown, 524
10.4.3 k-Means Clustering, 526
*10.4.4 Fuzzy k-Means Clustering, 528
10.5 Unsupervised Bayesian Learning, 530
10.5.1 The Bayes Classifier, 530
10.5.2 Learning the Parameter Vector, 531
Example 1 Unsupervised Learning of Gaussian Data, 534
10.5.3 Decision-Directed Approximation, 536
10.6 Data Description and Clustering, 537
10.6.1 Similarity Measures, 538
10.7 Criterion Functions for Clustering, 542
10.7.1 The Sum-of-Squared-Error Criterion, 542
10.7.2 Related Minimum Variance Criteria, 543
10.7.3 Scatter Criteria, 544
Example 2 Clustering Criteria, 546
10.8 Iterative Optimization, 548
10.9 Hierarchical Clustering, 550
10.9.1 Definitions, 551
10.9.2 Agglomerative Hierarchical Clustering, 552
10.9.3 Stepwise-Optimal Hierarchical Clustering, 555
10.9.4 Hierarchical Clustering and Induced Metrics, 556
*10.10 The Problem of Validity, 557
*10.11 On-line Clustering, 559
10.11.1 Unknown Number of Clusters, 561
10.11.2 Adaptive Resonance, 563
10.11.3 Learning with a Critic, 565
*10.12 Graph-Theoretic Methods, 566
10.13 Component Analysis, 568
10.13.1 Principal Component Analysis (PCA), 568
10.13.2 Nonlinear Component Analysis (NLCA), 569
*10.13.3 Independent Component Analysis (ICA), 570
10.14 Low-Dimensional Representations and Multidimensional Scaling (MDS), 573
10.14.1 Self-Organizing Feature Maps, 576
10.14.2 Clustering and Dimensionality Reduction, 580
Summary, 581
Bibliographical and Historical Remarks, 582
Problems, 583
Computer exercises, 593
Bibliography, 598

A MATHEMATICAL FOUNDATIONS, 601
A.1 Notation, 601
A.2 Linear Algebra, 604
A.2.1 Notation and Preliminaries, 604
A.2.2 Inner Product, 605
A.2.3 Outer Product, 606
A.2.4 Derivatives of Matrices, 606
A.2.5 Determinant and Trace, 608
A.2.6 Matrix Inversion, 609
A.2.7 Eigenvectors and Eigenvalues, 609
A.3 Lagrange Optimization, 610
A.4 Probability Theory, 611
A.4.1 Discrete Random Variables, 611
A.4.2 Expected Values, 611
A.4.3 Pairs of Discrete Random Variables, 612
A.4.4 Statistical Independence, 613
A.4.5 Expected Values of Functions of Two Variables, 613
A.4.6 Conditional Probability, 614
A.4.7 The Law of Total Probability and Bayes Rule, 615
A.4.8 Vector Random Variables, 616
A.4.9 Expectations, Mean Vectors and Covariance Matrices, 617
A.4.10 Continuous Random Variables, 618
A.4.11 Distributions of Sums of Independent Random Variables, 620
A.4.12 Normal Distributions, 621
A.5 Gaussian Derivatives and Integrals, 623
A.5.1 Multivariate Normal Densities, 624
A.5.2 Bivariate Normal Densities, 626
A.6 Hypothesis Testing, 628
A.6.1 Chi-Squared Test, 629
A.7 Information Theory, 630
A.7.1 Entropy and Information, 630
A.7.2 Relative Entropy, 632
A.7.3 Mutual Information, 632
A.8 Computational Complexity, 633
Bibliography, 635

INDEX