Bayesian optimization package
Samples from a Gaussian process that is actively learned.
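As a rough illustration of the idea (a minimal sketch, assuming nothing about the package's actual API), here is a NumPy toy of actively learning a 1-D Gaussian process: each round queries the candidate point where the posterior variance is largest.

```python
import numpy as np

# Hedged sketch (illustrative only; not the package's actual API):
# actively learn a 1-D Gaussian process by repeatedly querying the
# point where the posterior variance is largest.
f = lambda x: np.sin(3 * x)  # hidden function being learned

def rbf(a, b, ls=0.3):
    # Squared-exponential kernel between two 1-D point sets.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

X = np.array([0.0, 1.0])           # initial design points
y = f(X)
grid = np.linspace(0.0, 1.0, 101)  # candidate query locations

for _ in range(5):
    K = rbf(X, X) + 1e-6 * np.eye(len(X))  # jitter for stability
    Ks = rbf(grid, X)
    Kinv = np.linalg.inv(K)
    # Posterior variance at each candidate (prior variance is 1).
    # The posterior mean over y would give predictions; omitted here.
    var = 1.0 - np.sum(Ks @ Kinv * Ks, axis=1)
    x_next = grid[np.argmax(var)]  # most uncertain point
    X = np.append(X, x_next)
    y = np.append(y, f(x_next))

print(np.round(np.sort(X), 2))
```

The first query lands at the midpoint between the two initial points, where uncertainty is highest; later queries fill in the remaining gaps.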
Published in arXiv, 2013
Benchmark of stochastic gradient descent and Nesterov’s accelerated gradient for text classification.
Recommended citation: Anderson de Andrade. (2013). "A comparison of neural network training methods for text classification." arXiv:1910.12674.
Download Paper
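To make the comparison concrete (a toy sketch, not the paper's experimental setup), here is plain gradient descent versus Nesterov's accelerated gradient on a simple ill-conditioned quadratic; the look-ahead gradient is what distinguishes Nesterov's method from ordinary momentum.

```python
import math

# Hedged sketch (illustrative, not the paper's setup): gradient descent
# vs. Nesterov's accelerated gradient on f(w) = 0.5*(w1^2 + 10*w2^2).
CURV = (1.0, 10.0)  # per-coordinate curvatures

def grad(w):
    return [c * wi for c, wi in zip(CURV, w)]

def sgd(w, lr=0.05, steps=100):
    for _ in range(steps):
        g = grad(w)
        w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w

def nesterov(w, lr=0.05, mu=0.9, steps=100):
    v = [0.0, 0.0]
    for _ in range(steps):
        # Evaluate the gradient at the look-ahead point w + mu*v.
        look = [wi + mu * vi for wi, vi in zip(w, v)]
        g = grad(look)
        v = [mu * vi - lr * gi for vi, gi in zip(v, g)]
        w = [wi + vi for wi, vi in zip(w, v)]
    return w

norm = lambda w: math.hypot(*w)
print(norm(sgd([3.0, 2.0])), norm(nesterov([3.0, 2.0])))
```

On this problem the accelerated iterate ends much closer to the minimum than plain gradient descent after the same number of steps.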
Published in arXiv, 2014
Evaluates the performance impact of optimization algorithms, activation functions, dropout, and maxout networks in CNNs.
Recommended citation: Anderson de Andrade. (2014). "Best practices for convolutional neural networks applied to object recognition in images." arXiv:1910.13029.
Download Paper | Download Slides
Published in EMNLP Workshop on Noisy User-Generated Text, 2019
Sentence embeddings augmented with universal part-of-speech tags, evaluated on low-resource languages.
Recommended citation: Chen Liu, Anderson de Andrade, & Muhammad Osama. (2019). "Exploring multilingual syntactic sentence representations." EMNLP Workshop on Noisy User-Generated Text.
Download Paper
Published in arXiv, 2020
Graph representation learning using a learnable attention mechanism to adaptively sample node neighbourhoods.
Recommended citation: Anderson de Andrade, & Chen Liu. (2020). "Graph representation learning network via adaptive sampling." arXiv:2006.04637.
Download Paper | Download Bibtex
Published in NAACL Conference on Human Language Technologies: Industry Papers, 2021
Unified batch and online transformer inference.
Recommended citation: Amir Ganiev, Colt Chapin, Anderson de Andrade, & Chen Liu. (2021). "An architecture for accelerated large-scale inference of transformer-based language models." NAACL Conference on Human Language Technologies: Industry Papers.
Download Paper | Download Bibtex
Published in ICME Workshop on Coding for Machines, 2023
A comparison between conditional and residual entropy codecs for a two-channel system of tasks with nested information.
Recommended citation: Anderson de Andrade, Alon Harell, & Ivan Bajić. (2023). "Conditional and residual methods in scalable coding for humans and machines." ICME Workshop on Coding for Machines.
Download Paper | Download Slides | Download Bibtex
Published in ICME Workshop on Coding for Machines, 2024
Task reconstruction loss acts as a regularizer, increasing rate-distortion performance in coding for humans and machines.
Recommended citation: Anderson de Andrade, & Ivan Bajić. (2024). "Towards task-compatible compressible representations." ICME Workshop on Coding for Machines.
Download Paper | Download Slides | Download Bibtex
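The kind of objective described above can be sketched as a rate-distortion Lagrangian with an auxiliary task term (the function name and weights below are illustrative, not the paper's):

```python
# Hedged sketch of the kind of objective studied in coding for humans
# and machines: the base loss trades off rate against distortion, and a
# task reconstruction term is added as a regularizer. Weights are
# illustrative, not the paper's.
def compression_loss(rate, distortion, task_loss, lmbda=0.01, gamma=0.1):
    # Rate-distortion Lagrangian plus a weighted auxiliary task term.
    return rate + lmbda * distortion + gamma * task_loss

print(compression_loss(rate=1.2, distortion=30.0, task_loss=0.7))
```

The claim is that including the task term during training regularizes the learned representation, improving the rate-distortion trade-off rather than merely serving the downstream task.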
Published in IEEE TPAMI, 2025
Theoretical considerations and evaluation of split and distillation points.
Recommended citation: Alon Harell, Yalda Foroutan, Nilesh A. Ahuja, Parual Datta, Bhavya Kanzariya, V. Srinivasa Somayazulu, Omesh Tickoo, Anderson de Andrade, & Ivan V. Bajić. (2025). "Rate-distortion theory in coding for machines and its applications." IEEE TPAMI.
Download Paper | Download Bibtex
See the slides and more details about the publication here.
A six-hour tutorial describing a toolkit for machine learning research. We go over many of the details of our suggested tools for development, deployment, and artifact management, with an emphasis on best practices and our philosophy.
Undergraduate course, Simon Fraser University, School of Engineering Science, 2025
Ran lab sessions, graded exams and assignments, held office hours, and assisted students with MATLAB programming and engineering concepts.