Recognized worldwide as one of the leading experts in artificial intelligence, Yoshua Bengio is best known for his pioneering work in deep learning, which earned him the 2018 A.M. Turing Award, often called “the Nobel Prize of Computing,” shared with Geoffrey Hinton and Yann LeCun. He is a Full Professor at Université de Montréal and the Founder and Scientific Director of Mila – Quebec AI Institute. He co-directs the CIFAR Learning in Machines & Brains program as a Senior Fellow and acts as Scientific Director of IVADO.
In 2018, he collected the largest number of new citations in the world of any computer scientist, and in 2019 he was awarded the prestigious Killam Prize. He is a Fellow of both the Royal Society of London and the Royal Society of Canada, and an Officer of the Order of Canada. Concerned about the social impact of AI and committed to the goal that AI benefit everyone, he actively contributed to the Montreal Declaration for the Responsible Development of Artificial Intelligence.
Yoshua Bengio’s list of his top 20 most significant publications
- Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.
- Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. “Neural Machine Translation by Jointly Learning to Align and Translate”. In: ICLR’2015, arXiv:1409.0473.
- Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. “Deep Learning”. In: Nature 521.7553 (2015), pp. 436–444.
- Yann Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. “Identifying and attacking the saddle point problem in high-dimensional non-convex optimization”. In: NIPS’2014. 2014.
- Guido F. Montufar, Razvan Pascanu, KyungHyun Cho, and Yoshua Bengio. “On the Number of Linear Regions of Deep Neural Networks”. In: NIPS’2014.
- Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. “Generative Adversarial Networks”. In: NIPS’2014. 2014.
- Razvan Pascanu, Guido Montufar, and Yoshua Bengio. “On the number of inference regions of deep feed forward networks with piece-wise linear activations”. In: ICLR’2014.
- Yoshua Bengio, Li Yao, Guillaume Alain, and Pascal Vincent. “Generalized Denoising Auto-Encoders as Generative Models”. In: NIPS’2013. 2013.
- Xavier Glorot, Antoine Bordes, and Yoshua Bengio. “Deep Sparse Rectifier Neural Networks”. In: AISTATS’2011. 2011.
- Xavier Glorot and Yoshua Bengio. “Understanding the difficulty of training deep feedforward neural networks”. In: AISTATS’2010.
- Yoshua Bengio, Jerome Louradour, Ronan Collobert, and Jason Weston. “Curriculum Learning”. In: ICML’09. 2009.
- Yoshua Bengio. “Learning deep architectures for AI”. In: Foundations and Trends in Machine Learning 2.1 (2009), pp. 1–127.
- Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. “Extracting and Composing Robust Features with Denoising Autoencoders”. In: ICML’2008. 2008, pp. 1096–1103.
- Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. “Greedy Layer-Wise Training of Deep Networks”. In: NIPS’2006. 2007.
- Yoshua Bengio, Olivier Delalleau, and Nicolas Le Roux. “The Curse of Highly Variable Functions for Local Kernel Machines”. In: NIPS’2005. 2006.
- Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. “A Neural Probabilistic Language Model”. In: Journal of Machine Learning Research 3 (2003), pp. 1137–1155.
- Yoshua Bengio and Samy Bengio. “Modeling High-Dimensional Discrete Data with Multi-Layer Neural Networks”. In: NIPS’1999. MIT Press, 2000, pp. 400–406.
- Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. “Gradient-Based Learning Applied to Document Recognition”. In: Proceedings of the IEEE 86.11 (Nov. 1998), pp. 2278–2324.
- Yoshua Bengio, Patrice Simard, and Paolo Frasconi. “Learning Long-Term Dependencies with Gradient Descent is Difficult”. In: IEEE Transactions on Neural Networks 5.2 (1994), pp. 157–166.
- Yoshua Bengio, Samy Bengio, Jocelyn Cloutier, and Jan Gecsei. “Learning a Synaptic Learning Rule”. In: IJCNN’1991. Seattle, WA, 1991, II–A969.