Faculty Profile
 
Shao Hongmei (邵红梅)

»Name: Shao Hongmei (邵红梅)

»Department: Department of Computational Mathematics

 

»Degree: Ph.D.

»Title: Associate Professor

»Discipline: Mathematics

»Supervisor category:

»Email: hmshao@upc.edu.cn

»Telephone:

»Mailing address: No. 66, Changjiang West Road, Huangdao District, Qingdao, Shandong Province (Postal code: 266580)

»Overview

◎ Research Interests

1. Neural network computing

 

◎ Education and Work Experience

1999.9-2002.7, Yantai University, B.Sc.;

2002.9-2007.1, Dalian University of Technology, Ph.D. (combined master's and doctoral program);

2007.3-2010.11, China University of Petroleum (East China), Department of Applied Mathematics, Lecturer;

2010.12-present, China University of Petroleum (East China), Department of Computational Mathematics, Associate Professor.

 

◎ Courses Taught

1. Undergraduate courses: Mathematical Analysis, Optimization Methods, Principles of Optimization, Linear Algebra, and others;

2. Graduate courses: Selected Topics in Modern Mathematics, Optimization Methods.

 

◎ Research Projects (Led and Participated In)

1. Representative projects led in recent years as principal investigator:

(1) Shao Hongmei, project funded by the Fundamental Research Funds for the Central Universities, 2013-2014

2. Representative projects participated in during recent years:

(1) Wang Jian, Shao Hongmei, et al., Shandong Provincial Natural Science Foundation (General Program), 2018-2021

(2) Wang Jian, Shao Hongmei, et al., National Natural Science Foundation of China (Young Scientists Fund), 2014-2016

(3) Wang Jian, Shao Hongmei, et al., Shandong Provincial Natural Science Foundation (Youth Program), 2013-2016

 

◎ Awards (including both the instructor's own awards and awards won by supervised students)

1. University Teaching Achievement Award, bureau level, 2021.

2. University Teaching Achievement Award, bureau level, 2019.

3. Supervised students who won First Prize in the Shandong Provincial Mathematics Competition for College Students, provincial level, 2021.

4. Supervised students who won First Prize (provincial level) in the China Undergraduate Mathematical Contest in Modeling, 2009.

 

◎ Publications

1. Selected first-author papers:

(1) H.M. Shao, J. Wang, D.P. Xu, L.J. Liu and W.D. Bao. Relaxed conditions for convergence of batch BPAP for feedforward neural network. Neurocomputing, 153: 174-179, 2015.

(2) H.M. Shao, D.P. Xu, G.F. Zheng and L.J. Liu. Convergence of an online gradient method with inner-product penalty and adaptive momentum. Neurocomputing, 77: 243-252, 2012.

(3) H.M. Shao, G.F. Zheng. Convergence analysis of a back-propagation algorithm with adaptive momentum. Neurocomputing, 74(5): 749-752, 2011.

(4) H.M. Shao, D.P. Xu and G.F. Zheng. Convergence of a batch gradient algorithm with adaptive momentum for neural networks. Neural Processing Letters, 34: 221-228, 2011.

(5) H.M. Shao, G.F. Zheng. Boundedness and convergence of online gradient method with penalty and momentum. Neurocomputing, 74(5): 765-770, 2011.

(6) 邵红梅, 安凤仙 (H.M. Shao, F.X. An). A class of gradient algorithms for training feedforward neural networks and its convergence (in Chinese). Journal of China University of Petroleum (Edition of Natural Science), 4: 176-178, 2010.

(7) H.M. Shao, W. Wu and L.J. Liu. Convergence of an online gradient algorithm with penalty for two-layer neural networks. Communications in Mathematical Research, 26(1): 67-75, 2010.

(8) H.M. Shao, G.F. Zheng. Construction of Bayesian classifiers with GA for response modeling in direct marketing. IEEE Int. Conf. on Computer Science and Information Technology (ICCSIT 2009), 4: 89-92, 2009.

(9) H.M. Shao, G.F. Zheng. Convergence of a gradient algorithm with penalty for training two-layer neural networks. WRI Global Congress on Intelligent Systems (GCIS 2009), IEEE Computer Society Press, 4: 16-20, 2009.

(10) H.M. Shao, G.F. Zheng and F.X. An. Construction of Bayesian classifiers with GA for predicting customer retention. IEEE Int. Conf. on Natural Computation (ICNC'08), 1: 181-185, 2008.

(11) H.M. Shao, W. Wu and F. Li. Convergence of BP algorithm for training MLP with linear output. Numerical Mathematics: Theory, Methods and Applications (高等学校计算数学学报英文版), 16(1): 193-202, 2007.

(12) H.M. Shao, W. Wu, F. Li and G.F. Zheng. Convergence of batch gradient algorithm for feedforward neural network training. Journal of Information and Computational Science, 4(1): 251-255, 2007.

2. Selected papers as second (corresponding) author:

(1) L.J. Liu, H.M. Shao and D. Nan. Recurrent neural network model for computing largest and smallest generalized eigenvalue. Neurocomputing, 71(16-18): 3589-3594, 2008.

(2) W. Wu, H.M. Shao and Z.X. Li. Convergence of batch BP algorithm with penalty for FNN training. Lecture Notes in Computer Science, 4232: 562-569, 2006.