Deep Learning in Practice: Classification Results of MLP, LeNet, AlexNet, GoogLeNet, and ResNet on Three Datasets

The experimental results are summarized in the table below (test accuracy of each model on each dataset):

Model       MNIST    Fashion-MNIST   HWDB1
MLP         97.76%   87.73%          84.17%
LeNet       98.68%   85.82%          91.33%
AlexNet     98.91%   90.57%          89.67%
GoogLeNet   99.27%   90.27%          91.50%
ResNet      99.21%   91.35%          93.67%

Implementation notes:

1. ReLU is used as the activation function.
2. A batch normalization (BN) layer is inserted after each convolution.
3. Dropout is used to prevent overfitting.
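The three notes above can be combined into a single Conv, BN, ReLU, Dropout stage. A minimal Keras sketch of such a stage (the filter count and input size here are illustrative, not the exact values used in the experiments):

```python
import tensorflow as tf

def conv_bn_relu_dropout(filters, dropout_rate=0.25):
    # One convolution stage: conv -> batch norm -> ReLU -> dropout
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(filters, kernel_size=3, padding="same", use_bias=False),
        tf.keras.layers.BatchNormalization(),   # note 2: BN after the convolution
        tf.keras.layers.ReLU(),                 # note 1: ReLU activation
        tf.keras.layers.Dropout(dropout_rate),  # note 3: dropout against overfitting
    ])

block = conv_bn_relu_dropout(32)
x = tf.zeros((1, 28, 28, 1))       # one dummy 28x28 grayscale image
y = block(x, training=False)
print(y.shape)  # (1, 28, 28, 32)
```

With `padding="same"` the spatial size is preserved, so the stage only changes the channel count; BN and Dropout both behave differently at training time, which is why `training` is passed explicitly.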

(Figure: Inception module structure diagram)

GoogLeNet innovations:

v1: introduced the Inception module and used 1x1 convolutions to compress the number of channels (reducing the parameter count). The purpose of Inception: instead of hand-picking the filter size for a convolutional layer, or deciding whether to add a convolution or pooling layer at all, the network decides for itself. All candidate operations are added as parallel branches and their outputs are concatenated, and the network learns which branches it needs.

v2: introduced BN layers. BN speeds up training and mitigates vanishing gradients.

v3: (1) extended the BN layers from inside the Inception modules to the rest of the network; (2) optimized the structure by factorizing larger two-dimensional convolutions into two smaller one-dimensional ones, e.g. splitting a 3x3 convolution into a 1x3 and a 3x1. This saves a large number of parameters, speeds up computation, and adds an extra layer of nonlinearity.

v4: introduced residual connections.
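The savings from the v3 factorization are easy to quantify: a kxk convolution costs k^2 weights per input/output channel pair, while a 1xk plus kx1 pair costs only 2k. A quick back-of-the-envelope check (the channel count of 256 is illustrative):

```python
def conv_params(kh, kw, c_in, c_out):
    # Weight count of a kh x kw convolution, ignoring biases
    return kh * kw * c_in * c_out

c = 256  # illustrative channel count
full = conv_params(3, 3, c, c)                                  # one 3x3 conv
factorized = conv_params(1, 3, c, c) + conv_params(3, 1, c, c)  # 1x3 then 3x1

print(full, factorized)       # 589824 393216
print(1 - factorized / full)  # 0.333..., i.e. one third fewer parameters
```

For 3x3 kernels the reduction is exactly one third (6 weights instead of 9 per channel pair), matching the "saves a large number of parameters" claim above.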

```python
num_epochs = 10
batch_size = 50
learning_rate = 0.001
model = MLP()
data_loader = MNISTLoader()
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
num_batches = int(data_loader.num_train_data // batch_size)
for e in range(num_epochs):
    for batch_index in range(num_batches):
        X, y = data_loader.get_batch(batch_size)
        with tf.GradientTape() as tape:
            y_pred = model(X)
            loss = tf.keras.losses.sparse_categorical_crossentropy(y_true=y, y_pred=y_pred)
            loss = tf.reduce_mean(loss)
        grads = tape.gradient(loss, model.variables)
        optimizer.apply_gradients(grads_and_vars=zip(grads, model.variables))
    print("epoch %d: loss %f" % (e + 1, loss.numpy()))
```

Testing MLP on MNIST, accuracy: 97.76%

```python
sparse_categorical_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
num_batches = int(data_loader.num_test_data // batch_size)
for batch_index in range(num_batches):
    start_index, end_index = batch_index * batch_size, (batch_index + 1) * batch_size
    y_pred = model.predict(data_loader.test_data[start_index:end_index])
    sparse_categorical_accuracy.update_state(
        y_true=data_loader.test_label[start_index:end_index], y_pred=y_pred)
print("test accuracy: %f" % sparse_categorical_accuracy.result())
```

Output: test accuracy: 0.977600

Training MLP on Fashion-MNIST for 10 epochs

```python
num_epochs = 10
batch_size = 50
learning_rate = 0.001
model = MLP()
data_loader = Fashion_MNISTLoader()
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
num_batches = int(data_loader.num_train_data // batch_size)
for e in range(num_epochs):
    print("epoch %d" % (e + 1))
    for batch_index in range(num_batches):
        X, y = data_loader.get_batch(batch_size)
        with tf.GradientTape() as tape:
            y_pred = model(X)
            loss = tf.keras.losses.sparse_categorical_crossentropy(y_true=y, y_pred=y_pred)
            loss = tf.reduce_mean(loss)
            print("batch %d: loss %f" % (batch_index, loss.numpy()))
        grads = tape.gradient(loss, model.variables)
        optimizer.apply_gradients(grads_and_vars=zip(grads, model.variables))
```

Testing MLP on Fashion-MNIST, accuracy: 87.73%

```python
sparse_categorical_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
num_batches = int(data_loader.num_test_data // batch_size)
for batch_index in range(num_batches):
    start_index, end_index = batch_index * batch_size, (batch_index + 1) * batch_size
    y_pred = model.predict(data_loader.test_data[start_index:end_index])
    sparse_categorical_accuracy.update_state(
        y_true=data_loader.test_label[start_index:end_index], y_pred=y_pred)
print("test accuracy: %f" % sparse_categorical_accuracy.result())
```

Output: test accuracy: 0.877300

Training MLP on HWDB1 for 100 epochs

```python
num_epochs = 100
batch_size = 50
learning_rate = 0.001
model = MLP()
data_loader = HWDB1Loader()
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
num_batches = int(data_loader.num_train_data // batch_size)
for e in range(num_epochs):
    for batch_index in range(num_batches):
        X, y = data_loader.get_batch(batch_size)
        with tf.GradientTape() as tape:
            y_pred = model(X)
            loss = tf.keras.losses.sparse_categorical_crossentropy(y_true=y, y_pred=y_pred)
            loss = tf.reduce_mean(loss)
        grads = tape.gradient(loss, model.variables)
        optimizer.apply_gradients(grads_and_vars=zip(grads, model.variables))
    print("epoch %d: loss %f" % (e + 1, loss.numpy()))
```

Testing MLP on HWDB1, accuracy: 84.17%

```python
sparse_categorical_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
num_batches = int(data_loader.num_test_data // batch_size)
for batch_index in range(num_batches):
    start_index, end_index = batch_index * batch_size, (batch_index + 1) * batch_size
    y_pred = model.predict(data_loader.test_data[start_index:end_index])
    sparse_categorical_accuracy.update_state(
        y_true=data_loader.test_label[start_index:end_index], y_pred=y_pred)
print("test accuracy: %f" % sparse_categorical_accuracy.result())
```

Output: test accuracy: 0.841667

Training LeNet on MNIST for 10 epochs

```python
num_epochs = 10
batch_size = 50
learning_rate = 0.001
model = LeNet()
data_loader = MNISTLoader()
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
num_batches = int(data_loader.num_train_data // batch_size)
for e in range(num_epochs):
    for batch_index in range(num_batches):
        X, y = data_loader.get_batch(batch_size)
        with tf.GradientTape() as tape:
            y_pred = model(X)
            loss = tf.keras.losses.sparse_categorical_crossentropy(y_true=y, y_pred=y_pred)
            loss = tf.reduce_mean(loss)
        grads = tape.gradient(loss, model.variables)
        optimizer.apply_gradients(grads_and_vars=zip(grads, model.variables))
    print("epoch %d: loss %f" % (e + 1, loss.numpy()))
```

Testing LeNet on MNIST, accuracy: 98.68%

```python
sparse_categorical_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
num_batches = int(data_loader.num_test_data // batch_size)
for batch_index in range(num_batches):
    start_index, end_index = batch_index * batch_size, (batch_index + 1) * batch_size
    y_pred = model.predict(data_loader.test_data[start_index:end_index])
    sparse_categorical_accuracy.update_state(
        y_true=data_loader.test_label[start_index:end_index], y_pred=y_pred)
print("test accuracy: %f" % sparse_categorical_accuracy.result())
```

Output: test accuracy: 0.986800

Training LeNet on Fashion-MNIST for 10 epochs

```python
num_epochs = 10
batch_size = 50
learning_rate = 0.001
model = LeNet()
data_loader = Fashion_MNISTLoader()
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
num_batches = int(data_loader.num_train_data // batch_size)
for e in range(num_epochs):
    for batch_index in range(num_batches):
        X, y = data_loader.get_batch(batch_size)
        with tf.GradientTape() as tape:
            y_pred = model(X)
            loss = tf.keras.losses.sparse_categorical_crossentropy(y_true=y, y_pred=y_pred)
            loss = tf.reduce_mean(loss)
        grads = tape.gradient(loss, model.variables)
        optimizer.apply_gradients(grads_and_vars=zip(grads, model.variables))
    print("epoch %d: loss %f" % (e + 1, loss.numpy()))
```

Testing LeNet on Fashion-MNIST, accuracy: 85.82%

```python
sparse_categorical_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
num_batches = int(data_loader.num_test_data // batch_size)
for batch_index in range(num_batches):
    start_index, end_index = batch_index * batch_size, (batch_index + 1) * batch_size
    y_pred = model.predict(data_loader.test_data[start_index:end_index])
    sparse_categorical_accuracy.update_state(
        y_true=data_loader.test_label[start_index:end_index], y_pred=y_pred)
print("test accuracy: %f" % sparse_categorical_accuracy.result())
```

Output: test accuracy: 0.858200

Training LeNet on HWDB1 for 100 epochs

```python
num_epochs = 100
batch_size = 50
learning_rate = 0.001
model = LeNet()
data_loader = HWDB1Loader()
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
num_batches = int(data_loader.num_train_data // batch_size)
for e in range(num_epochs):
    for batch_index in range(num_batches):
        X, y = data_loader.get_batch(batch_size)
        with tf.GradientTape() as tape:
            y_pred = model(X)
            loss = tf.keras.losses.sparse_categorical_crossentropy(y_true=y, y_pred=y_pred)
            loss = tf.reduce_mean(loss)
        grads = tape.gradient(loss, model.variables)
        optimizer.apply_gradients(grads_and_vars=zip(grads, model.variables))
    print("epoch %d: loss %f" % (e + 1, loss.numpy()))
```

Testing LeNet on HWDB1, accuracy: 91.33%

```python
sparse_categorical_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
num_batches = int(data_loader.num_test_data // batch_size)
for batch_index in range(num_batches):
    start_index, end_index = batch_index * batch_size, (batch_index + 1) * batch_size
    y_pred = model.predict(data_loader.test_data[start_index:end_index])
    sparse_categorical_accuracy.update_state(
        y_true=data_loader.test_label[start_index:end_index], y_pred=y_pred)
print("test accuracy: %f" % sparse_categorical_accuracy.result())
```

Output: test accuracy: 0.913333

Training AlexNet on MNIST for 10 epochs

```python
num_epochs = 10
batch_size = 50
learning_rate = 0.001
model = AlexNet()
data_loader = MNISTLoader()
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
num_batches = int(data_loader.num_train_data // batch_size)
for e in range(num_epochs):
    for batch_index in range(num_batches):
        X, y = data_loader.get_batch(batch_size)
        with tf.GradientTape() as tape:
            y_pred = model(X)
            loss = tf.keras.losses.sparse_categorical_crossentropy(y_true=y, y_pred=y_pred)
            loss = tf.reduce_mean(loss)
        grads = tape.gradient(loss, model.variables)
        optimizer.apply_gradients(grads_and_vars=zip(grads, model.variables))
    print("epoch %d: loss %f" % (e + 1, loss.numpy()))
```

Output:

```
epoch 1: loss 0.067972
epoch 2: loss 0.008088
epoch 3: loss 0.002195
epoch 4: loss 0.000097
epoch 5: loss 0.008648
epoch 6: loss 0.002917
epoch 7: loss 0.024030
epoch 8: loss 0.000092
epoch 9: loss 0.003540
epoch 10: loss 0.003601
```

Testing AlexNet on MNIST, accuracy: 98.91%

```python
sparse_categorical_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
num_batches = int(data_loader.num_test_data // batch_size)
for batch_index in range(num_batches):
    start_index, end_index = batch_index * batch_size, (batch_index + 1) * batch_size
    y_pred = model.predict(data_loader.test_data[start_index:end_index])
    sparse_categorical_accuracy.update_state(
        y_true=data_loader.test_label[start_index:end_index], y_pred=y_pred)
print("test accuracy: %f" % sparse_categorical_accuracy.result())
```

Output: test accuracy: 0.989100

Training AlexNet on Fashion-MNIST for 10 epochs

```python
num_epochs = 10
batch_size = 50
learning_rate = 0.001
model = AlexNet()
data_loader = Fashion_MNISTLoader()
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
num_batches = int(data_loader.num_train_data // batch_size)
for e in range(num_epochs):
    for batch_index in range(num_batches):
        X, y = data_loader.get_batch(batch_size)
        with tf.GradientTape() as tape:
            y_pred = model(X)
            loss = tf.keras.losses.sparse_categorical_crossentropy(y_true=y, y_pred=y_pred)
            loss = tf.reduce_mean(loss)
        grads = tape.gradient(loss, model.variables)
        optimizer.apply_gradients(grads_and_vars=zip(grads, model.variables))
    print("epoch %d: loss %f" % (e + 1, loss.numpy()))
```

Output:

```
epoch 1: loss 0.512883
epoch 2: loss 0.372691
epoch 3: loss 0.317794
epoch 4: loss 0.209880
epoch 5: loss 0.259238
epoch 6: loss 0.246677
epoch 7: loss 0.325049
epoch 8: loss 0.085606
epoch 9: loss 0.231111
epoch 10: loss 0.108106
```

Testing AlexNet on Fashion-MNIST, accuracy: 90.57%

```python
sparse_categorical_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
num_batches = int(data_loader.num_test_data // batch_size)
for batch_index in range(num_batches):
    start_index, end_index = batch_index * batch_size, (batch_index + 1) * batch_size
    y_pred = model.predict(data_loader.test_data[start_index:end_index])
    sparse_categorical_accuracy.update_state(
        y_true=data_loader.test_label[start_index:end_index], y_pred=y_pred)
print("test accuracy: %f" % sparse_categorical_accuracy.result())
```

Output: test accuracy: 0.905700

Training AlexNet on HWDB1 for 10 epochs

```python
num_epochs = 10
batch_size = 50
learning_rate = 0.001
model = AlexNet()
data_loader = HWDB1Loader()
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
num_batches = int(data_loader.num_train_data // batch_size)
for e in range(num_epochs):
    for batch_index in range(num_batches):
        X, y = data_loader.get_batch(batch_size)
        with tf.GradientTape() as tape:
            y_pred = model(X)
            loss = tf.keras.losses.sparse_categorical_crossentropy(y_true=y, y_pred=y_pred)
            loss = tf.reduce_mean(loss)
        grads = tape.gradient(loss, model.variables)
        optimizer.apply_gradients(grads_and_vars=zip(grads, model.variables))
    print("epoch %d: loss %f" % (e + 1, loss.numpy()))
```

Output:

```
epoch 1: loss 2.154262
epoch 2: loss 1.550519
epoch 3: loss 1.117614
epoch 4: loss 0.796395
epoch 5: loss 0.482055
epoch 6: loss 0.261059
epoch 7: loss 0.598217
epoch 8: loss 0.408028
epoch 9: loss 0.527755
epoch 10: loss 0.232926
```

Testing AlexNet on HWDB1, accuracy: 89.67%

```python
sparse_categorical_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
num_batches = int(data_loader.num_test_data // batch_size)
for batch_index in range(num_batches):
    start_index, end_index = batch_index * batch_size, (batch_index + 1) * batch_size
    y_pred = model.predict(data_loader.test_data[start_index:end_index])
    sparse_categorical_accuracy.update_state(
        y_true=data_loader.test_label[start_index:end_index], y_pred=y_pred)
print("test accuracy: %f" % sparse_categorical_accuracy.result())
```

Output: test accuracy: 0.896667

Training GoogLeNet on MNIST for 10 epochs

```python
num_epochs = 10
batch_size = 50
learning_rate = 0.001
model = Inception(num_blocks=2, num_classes=10)
data_loader = MNISTLoader()
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
num_batches = int(data_loader.num_train_data // batch_size)
for e in range(num_epochs):
    for batch_index in range(num_batches):
        X, y = data_loader.get_batch(batch_size)
        with tf.GradientTape() as tape:
            y_pred = model(X)
            loss = tf.keras.losses.sparse_categorical_crossentropy(y_true=y, y_pred=y_pred)
            loss = tf.reduce_mean(loss)
        grads = tape.gradient(loss, model.variables)
        optimizer.apply_gradients(grads_and_vars=zip(grads, model.variables))
    print("epoch %d: loss %f" % (e + 1, loss.numpy()))
```

Output:

```
epoch 1: loss 0.226245
epoch 2: loss 0.046722
epoch 3: loss 0.043534
epoch 4: loss 0.005478
epoch 5: loss 0.005200
epoch 6: loss 0.020319
epoch 7: loss 0.087346
epoch 8: loss 0.008595
epoch 9: loss 0.035517
epoch 10: loss 0.025345
```

Testing GoogLeNet on MNIST, accuracy: 99.27%

```python
sparse_categorical_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
num_batches = int(data_loader.num_test_data // batch_size)
for batch_index in range(num_batches):
    start_index, end_index = batch_index * batch_size, (batch_index + 1) * batch_size
    y_pred = model.predict(data_loader.test_data[start_index:end_index])
    sparse_categorical_accuracy.update_state(
        y_true=data_loader.test_label[start_index:end_index], y_pred=y_pred)
print("test accuracy: %f" % sparse_categorical_accuracy.result())
```

Output: test accuracy: 0.992700

Training GoogLeNet on Fashion-MNIST for 10 epochs

```python
num_epochs = 10
batch_size = 50
learning_rate = 0.001
model = Inception(num_blocks=2, num_classes=10)
data_loader = Fashion_MNISTLoader()
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
num_batches = int(data_loader.num_train_data // batch_size)
for e in range(num_epochs):
    for batch_index in range(num_batches):
        X, y = data_loader.get_batch(batch_size)
        with tf.GradientTape() as tape:
            y_pred = model(X)
            loss = tf.keras.losses.sparse_categorical_crossentropy(y_true=y, y_pred=y_pred)
            loss = tf.reduce_mean(loss)
        grads = tape.gradient(loss, model.variables)
        optimizer.apply_gradients(grads_and_vars=zip(grads, model.variables))
    print("epoch %d: loss %f" % (e + 1, loss.numpy()))
```

Output:

```
epoch 1: loss 0.555443
epoch 2: loss 0.369227
epoch 3: loss 0.424934
epoch 4: loss 0.332159
epoch 5: loss 0.254867
epoch 6: loss 0.216396
epoch 7: loss 0.330004
epoch 8: loss 0.130716
epoch 9: loss 0.225775
epoch 10: loss 0.072902
```

Testing GoogLeNet on Fashion-MNIST, accuracy: 90.27%

```python
sparse_categorical_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
num_batches = int(data_loader.num_test_data // batch_size)
for batch_index in range(num_batches):
    start_index, end_index = batch_index * batch_size, (batch_index + 1) * batch_size
    y_pred = model.predict(data_loader.test_data[start_index:end_index])
    sparse_categorical_accuracy.update_state(
        y_true=data_loader.test_label[start_index:end_index], y_pred=y_pred)
print("test accuracy: %f" % sparse_categorical_accuracy.result())
```

Output: test accuracy: 0.902700

Training GoogLeNet on HWDB1 for 10 epochs

```python
num_epochs = 10
batch_size = 50
learning_rate = 0.001
model = Inception(num_blocks=2, num_classes=10)
data_loader = HWDB1Loader()
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
num_batches = int(data_loader.num_train_data // batch_size)
for e in range(num_epochs):
    for batch_index in range(num_batches):
        X, y = data_loader.get_batch(batch_size)
        with tf.GradientTape() as tape:
            y_pred = model(X)
            loss = tf.keras.losses.sparse_categorical_crossentropy(y_true=y, y_pred=y_pred)
            loss = tf.reduce_mean(loss)
        grads = tape.gradient(loss, model.variables)
        optimizer.apply_gradients(grads_and_vars=zip(grads, model.variables))
    print("epoch %d: loss %f" % (e + 1, loss.numpy()))
```

Output:

```
epoch 1: loss 2.175095
epoch 2: loss 1.401912
epoch 3: loss 1.073104
epoch 4: loss 0.582406
epoch 5: loss 0.561393
epoch 6: loss 0.342880
epoch 7: loss 0.225490
epoch 8: loss 0.534456
epoch 9: loss 0.142884
epoch 10: loss 0.083469
```

Testing GoogLeNet on HWDB1, accuracy: 91.50%

```python
sparse_categorical_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
num_batches = int(data_loader.num_test_data // batch_size)
for batch_index in range(num_batches):
    start_index, end_index = batch_index * batch_size, (batch_index + 1) * batch_size
    y_pred = model.predict(data_loader.test_data[start_index:end_index])
    sparse_categorical_accuracy.update_state(
        y_true=data_loader.test_label[start_index:end_index], y_pred=y_pred)
print("test accuracy: %f" % sparse_categorical_accuracy.result())
```

Output: test accuracy: 0.915000

Training ResNet on MNIST for 10 epochs

```python
num_epochs = 10
batch_size = 50
learning_rate = 0.001
model = ResNet18([2, 2, 2, 2])
data_loader = MNISTLoader()
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
num_batches = int(data_loader.num_train_data // batch_size)
for e in range(num_epochs):
    for batch_index in range(num_batches):
        X, y = data_loader.get_batch(batch_size)
        with tf.GradientTape() as tape:
            y_pred = model(X)
            loss = tf.keras.losses.sparse_categorical_crossentropy(y_true=y, y_pred=y_pred)
            loss = tf.reduce_mean(loss)
        grads = tape.gradient(loss, model.variables)
        optimizer.apply_gradients(grads_and_vars=zip(grads, model.variables))
    print("epoch %d: loss %f" % (e + 1, loss.numpy()))
```

Output:

```
epoch 1: loss 0.009949
epoch 2: loss 0.026236
epoch 3: loss 0.016156
epoch 4: loss 0.000139
epoch 5: loss 0.003694
epoch 6: loss 0.004449
epoch 7: loss 0.001600
epoch 8: loss 0.001913
epoch 9: loss 0.001122
epoch 10: loss 0.005298
```

Testing ResNet on MNIST, accuracy: 99.21%

```python
sparse_categorical_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
num_batches = int(data_loader.num_test_data // batch_size)
for batch_index in range(num_batches):
    start_index, end_index = batch_index * batch_size, (batch_index + 1) * batch_size
    y_pred = model.predict(data_loader.test_data[start_index:end_index])
    sparse_categorical_accuracy.update_state(
        y_true=data_loader.test_label[start_index:end_index], y_pred=y_pred)
print("test accuracy: %f" % sparse_categorical_accuracy.result())
```

Output: test accuracy: 0.992100

Training ResNet on Fashion-MNIST for 10 epochs

```python
num_epochs = 10
batch_size = 50
learning_rate = 0.001
model = ResNet18([2, 2, 2, 2])
data_loader = Fashion_MNISTLoader()
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
num_batches = int(data_loader.num_train_data // batch_size)
for e in range(num_epochs):
    print("epoch %d" % (e + 1))
    for batch_index in range(num_batches):
        X, y = data_loader.get_batch(batch_size)
        with tf.GradientTape() as tape:
            y_pred = model(X)
            loss = tf.keras.losses.sparse_categorical_crossentropy(y_true=y, y_pred=y_pred)
            loss = tf.reduce_mean(loss)
            print("batch %d: loss %f" % (batch_index, loss.numpy()))
        grads = tape.gradient(loss, model.variables)
        optimizer.apply_gradients(grads_and_vars=zip(grads, model.variables))
```

Testing ResNet on Fashion-MNIST, accuracy: 91.35%

```python
sparse_categorical_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
num_batches = int(data_loader.num_test_data // batch_size)
for batch_index in range(num_batches):
    start_index, end_index = batch_index * batch_size, (batch_index + 1) * batch_size
    y_pred = model.predict(data_loader.test_data[start_index:end_index])
    sparse_categorical_accuracy.update_state(
        y_true=data_loader.test_label[start_index:end_index], y_pred=y_pred)
print("test accuracy: %f" % sparse_categorical_accuracy.result())
```

Output: test accuracy: 0.913500

Training ResNet on HWDB1 for 10 epochs

```python
num_epochs = 10
batch_size = 50
learning_rate = 0.001
model = ResNet18([2, 2, 2, 2])
data_loader = HWDB1Loader()
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
num_batches = int(data_loader.num_train_data // batch_size)
for e in range(num_epochs):
    for batch_index in range(num_batches):
        X, y = data_loader.get_batch(batch_size)
        with tf.GradientTape() as tape:
            y_pred = model(X)
            loss = tf.keras.losses.sparse_categorical_crossentropy(y_true=y, y_pred=y_pred)
            loss = tf.reduce_mean(loss)
        grads = tape.gradient(loss, model.variables)
        optimizer.apply_gradients(grads_and_vars=zip(grads, model.variables))
    print("epoch %d: loss %f" % (e + 1, loss.numpy()))
```

Output:

```
epoch 1: loss 2.286184
epoch 2: loss 1.170036
epoch 3: loss 0.718102
epoch 4: loss 0.766253
epoch 5: loss 0.271660
epoch 6: loss 0.121803
epoch 7: loss 0.105720
epoch 8: loss 0.029223
epoch 9: loss 0.422725
epoch 10: loss 0.005966
```
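The ResNet18([2, 2, 2, 2]) model trained above is built from basic residual blocks. A minimal sketch of one such block, assuming the usual two-stage Conv-BN layout with an identity shortcut (an illustration, not the exact class used in the experiments):

```python
import tensorflow as tf

class BasicBlock(tf.keras.Model):
    # y = ReLU(F(x) + x): two 3x3 conv-BN stages plus an identity shortcut
    def __init__(self, filters):
        super().__init__()
        self.conv1 = tf.keras.layers.Conv2D(filters, 3, padding="same", use_bias=False)
        self.bn1 = tf.keras.layers.BatchNormalization()
        self.conv2 = tf.keras.layers.Conv2D(filters, 3, padding="same", use_bias=False)
        self.bn2 = tf.keras.layers.BatchNormalization()

    def call(self, x, training=False):
        out = tf.nn.relu(self.bn1(self.conv1(x), training=training))
        out = self.bn2(self.conv2(out), training=training)
        return tf.nn.relu(out + x)  # the residual shortcut

block = BasicBlock(16)
x = tf.zeros((1, 8, 8, 16))  # channels must match for the identity shortcut
y = block(x, training=False)
print(y.shape)  # (1, 8, 8, 16)
```

The shortcut lets gradients flow directly through `out + x`, which is what allows the 18-layer network above to train without the degradation that plain deep stacks suffer; when the block changes the channel count or stride, a 1x1 convolution on the shortcut path is needed instead of the identity.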
